Apr 24 23:36:11.256468 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 24 23:36:11.256513 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 24 22:19:35 -00 2026
Apr 24 23:36:11.256538 kernel: KASLR disabled due to lack of seed
Apr 24 23:36:11.256555 kernel: efi: EFI v2.7 by EDK II
Apr 24 23:36:11.256572 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 24 23:36:11.256587 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:36:11.256606 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 24 23:36:11.256621 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 24 23:36:11.256638 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 24 23:36:11.256654 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 24 23:36:11.256674 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 24 23:36:11.256690 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 24 23:36:11.256706 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 24 23:36:11.256723 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 24 23:36:11.256742 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 24 23:36:11.256763 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 24 23:36:11.256781 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 24 23:36:11.256797 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 24 23:36:11.256814 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 24 23:36:11.256831 kernel: printk: bootconsole [uart0] enabled
Apr 24 23:36:11.256848 kernel: NUMA: Failed to initialise from firmware
Apr 24 23:36:11.256865 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 24 23:36:11.256882 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 24 23:36:11.256899 kernel: Zone ranges:
Apr 24 23:36:11.256916 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 24 23:36:11.256933 kernel: DMA32 empty
Apr 24 23:36:11.256953 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 24 23:36:11.256971 kernel: Movable zone start for each node
Apr 24 23:36:11.256987 kernel: Early memory node ranges
Apr 24 23:36:11.257004 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 24 23:36:11.257020 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 24 23:36:11.257037 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 24 23:36:11.257054 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 24 23:36:11.257071 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 24 23:36:11.257088 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 24 23:36:11.257104 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 24 23:36:11.257121 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 24 23:36:11.257138 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 24 23:36:11.257160 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 24 23:36:11.257178 kernel: psci: probing for conduit method from ACPI.
Apr 24 23:36:11.257204 kernel: psci: PSCIv1.0 detected in firmware.
Apr 24 23:36:11.257223 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 24 23:36:11.257243 kernel: psci: Trusted OS migration not required
Apr 24 23:36:11.257268 kernel: psci: SMC Calling Convention v1.1
Apr 24 23:36:11.257286 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 24 23:36:11.257305 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 24 23:36:11.257323 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 24 23:36:11.257341 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 24 23:36:11.257395 kernel: Detected PIPT I-cache on CPU0
Apr 24 23:36:11.257417 kernel: CPU features: detected: GIC system register CPU interface
Apr 24 23:36:11.257436 kernel: CPU features: detected: Spectre-v2
Apr 24 23:36:11.257454 kernel: CPU features: detected: Spectre-v3a
Apr 24 23:36:11.257472 kernel: CPU features: detected: Spectre-BHB
Apr 24 23:36:11.257490 kernel: CPU features: detected: ARM erratum 1742098
Apr 24 23:36:11.257515 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 24 23:36:11.257534 kernel: alternatives: applying boot alternatives
Apr 24 23:36:11.257555 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:36:11.257574 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:36:11.257592 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:36:11.257610 kernel: Fallback order for Node 0: 0
Apr 24 23:36:11.257628 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 24 23:36:11.257646 kernel: Policy zone: Normal
Apr 24 23:36:11.257664 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:36:11.257682 kernel: software IO TLB: area num 2.
Apr 24 23:36:11.257700 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 24 23:36:11.257725 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 24 23:36:11.257744 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:36:11.257761 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:36:11.257780 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:36:11.257798 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:36:11.257817 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:36:11.257835 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:36:11.257853 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:36:11.257871 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:36:11.257888 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 24 23:36:11.257906 kernel: GICv3: 96 SPIs implemented
Apr 24 23:36:11.257927 kernel: GICv3: 0 Extended SPIs implemented
Apr 24 23:36:11.257946 kernel: Root IRQ handler: gic_handle_irq
Apr 24 23:36:11.257963 kernel: GICv3: GICv3 features: 16 PPIs
Apr 24 23:36:11.257980 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 24 23:36:11.257998 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 24 23:36:11.258016 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 24 23:36:11.258034 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 24 23:36:11.258052 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 24 23:36:11.258070 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 24 23:36:11.258088 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 24 23:36:11.258105 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:36:11.258123 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 24 23:36:11.258145 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 24 23:36:11.258163 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 24 23:36:11.258181 kernel: Console: colour dummy device 80x25
Apr 24 23:36:11.258200 kernel: printk: console [tty1] enabled
Apr 24 23:36:11.258218 kernel: ACPI: Core revision 20230628
Apr 24 23:36:11.258237 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 24 23:36:11.258255 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:36:11.258273 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:36:11.258291 kernel: landlock: Up and running.
Apr 24 23:36:11.258313 kernel: SELinux: Initializing.
Apr 24 23:36:11.258332 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:36:11.260858 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:36:11.260904 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:36:11.260924 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:36:11.260943 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:36:11.260962 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:36:11.260981 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 24 23:36:11.260999 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 24 23:36:11.261028 kernel: Remapping and enabling EFI services.
Apr 24 23:36:11.261048 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:36:11.261067 kernel: Detected PIPT I-cache on CPU1
Apr 24 23:36:11.261087 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 24 23:36:11.261106 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 24 23:36:11.261124 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 24 23:36:11.261143 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:36:11.261163 kernel: SMP: Total of 2 processors activated.
Apr 24 23:36:11.261181 kernel: CPU features: detected: 32-bit EL0 Support
Apr 24 23:36:11.261205 kernel: CPU features: detected: 32-bit EL1 Support
Apr 24 23:36:11.261224 kernel: CPU features: detected: CRC32 instructions
Apr 24 23:36:11.261246 kernel: CPU: All CPU(s) started at EL1
Apr 24 23:36:11.261301 kernel: alternatives: applying system-wide alternatives
Apr 24 23:36:11.261404 kernel: devtmpfs: initialized
Apr 24 23:36:11.261445 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:36:11.261478 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:36:11.261500 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:36:11.261523 kernel: SMBIOS 3.0.0 present.
Apr 24 23:36:11.261554 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 24 23:36:11.261575 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:36:11.261594 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 24 23:36:11.261613 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 24 23:36:11.261633 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 24 23:36:11.261652 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:36:11.261671 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Apr 24 23:36:11.261690 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:36:11.261714 kernel: cpuidle: using governor menu
Apr 24 23:36:11.261733 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 24 23:36:11.261753 kernel: ASID allocator initialised with 65536 entries
Apr 24 23:36:11.261772 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:36:11.261791 kernel: Serial: AMBA PL011 UART driver
Apr 24 23:36:11.261809 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 24 23:36:11.261828 kernel: Modules: 509008 pages in range for PLT usage
Apr 24 23:36:11.261848 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:36:11.261867 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:36:11.261891 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 24 23:36:11.261911 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 24 23:36:11.261930 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:36:11.261949 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:36:11.261967 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 24 23:36:11.261986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 24 23:36:11.262005 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:36:11.262024 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:36:11.262043 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:36:11.262066 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:36:11.262085 kernel: ACPI: Interpreter enabled
Apr 24 23:36:11.262104 kernel: ACPI: Using GIC for interrupt routing
Apr 24 23:36:11.262123 kernel: ACPI: MCFG table detected, 1 entries
Apr 24 23:36:11.262141 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 24 23:36:11.262511 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:36:11.262740 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 24 23:36:11.262951 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 24 23:36:11.263194 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 24 23:36:11.263488 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 24 23:36:11.263516 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 24 23:36:11.263536 kernel: acpiphp: Slot [1] registered
Apr 24 23:36:11.263555 kernel: acpiphp: Slot [2] registered
Apr 24 23:36:11.263574 kernel: acpiphp: Slot [3] registered
Apr 24 23:36:11.263592 kernel: acpiphp: Slot [4] registered
Apr 24 23:36:11.263611 kernel: acpiphp: Slot [5] registered
Apr 24 23:36:11.263637 kernel: acpiphp: Slot [6] registered
Apr 24 23:36:11.263656 kernel: acpiphp: Slot [7] registered
Apr 24 23:36:11.263675 kernel: acpiphp: Slot [8] registered
Apr 24 23:36:11.263694 kernel: acpiphp: Slot [9] registered
Apr 24 23:36:11.263713 kernel: acpiphp: Slot [10] registered
Apr 24 23:36:11.263732 kernel: acpiphp: Slot [11] registered
Apr 24 23:36:11.263750 kernel: acpiphp: Slot [12] registered
Apr 24 23:36:11.263769 kernel: acpiphp: Slot [13] registered
Apr 24 23:36:11.263788 kernel: acpiphp: Slot [14] registered
Apr 24 23:36:11.263807 kernel: acpiphp: Slot [15] registered
Apr 24 23:36:11.263831 kernel: acpiphp: Slot [16] registered
Apr 24 23:36:11.263850 kernel: acpiphp: Slot [17] registered
Apr 24 23:36:11.263869 kernel: acpiphp: Slot [18] registered
Apr 24 23:36:11.263888 kernel: acpiphp: Slot [19] registered
Apr 24 23:36:11.263906 kernel: acpiphp: Slot [20] registered
Apr 24 23:36:11.263925 kernel: acpiphp: Slot [21] registered
Apr 24 23:36:11.263944 kernel: acpiphp: Slot [22] registered
Apr 24 23:36:11.263963 kernel: acpiphp: Slot [23] registered
Apr 24 23:36:11.263982 kernel: acpiphp: Slot [24] registered
Apr 24 23:36:11.264006 kernel: acpiphp: Slot [25] registered
Apr 24 23:36:11.264025 kernel: acpiphp: Slot [26] registered
Apr 24 23:36:11.264044 kernel: acpiphp: Slot [27] registered
Apr 24 23:36:11.264062 kernel: acpiphp: Slot [28] registered
Apr 24 23:36:11.264081 kernel: acpiphp: Slot [29] registered
Apr 24 23:36:11.264100 kernel: acpiphp: Slot [30] registered
Apr 24 23:36:11.264119 kernel: acpiphp: Slot [31] registered
Apr 24 23:36:11.264137 kernel: PCI host bridge to bus 0000:00
Apr 24 23:36:11.264375 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 24 23:36:11.266746 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 24 23:36:11.268741 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 24 23:36:11.268962 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 24 23:36:11.269236 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 24 23:36:11.269547 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 24 23:36:11.269804 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 24 23:36:11.270080 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 24 23:36:11.270319 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 24 23:36:11.271676 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 24 23:36:11.271932 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 24 23:36:11.272153 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 24 23:36:11.274450 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 24 23:36:11.274718 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 24 23:36:11.275033 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 24 23:36:11.275266 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 24 23:36:11.275544 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 24 23:36:11.275740 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 24 23:36:11.275767 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 24 23:36:11.275788 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 24 23:36:11.275808 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 24 23:36:11.275828 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 24 23:36:11.275855 kernel: iommu: Default domain type: Translated
Apr 24 23:36:11.275874 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 24 23:36:11.275893 kernel: efivars: Registered efivars operations
Apr 24 23:36:11.275912 kernel: vgaarb: loaded
Apr 24 23:36:11.275932 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 24 23:36:11.275951 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:36:11.275970 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:36:11.275989 kernel: pnp: PnP ACPI init
Apr 24 23:36:11.276220 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 24 23:36:11.276255 kernel: pnp: PnP ACPI: found 1 devices
Apr 24 23:36:11.276275 kernel: NET: Registered PF_INET protocol family
Apr 24 23:36:11.276295 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:36:11.276314 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:36:11.276334 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:36:11.278591 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:36:11.278633 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:36:11.278654 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:36:11.278684 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:36:11.278705 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:36:11.278724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:36:11.278743 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:36:11.278762 kernel: kvm [1]: HYP mode not available
Apr 24 23:36:11.278782 kernel: Initialise system trusted keyrings
Apr 24 23:36:11.278801 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 23:36:11.278821 kernel: Key type asymmetric registered
Apr 24 23:36:11.278840 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:36:11.278864 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 24 23:36:11.278884 kernel: io scheduler mq-deadline registered
Apr 24 23:36:11.278903 kernel: io scheduler kyber registered
Apr 24 23:36:11.278922 kernel: io scheduler bfq registered
Apr 24 23:36:11.279259 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 24 23:36:11.279294 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 24 23:36:11.279314 kernel: ACPI: button: Power Button [PWRB]
Apr 24 23:36:11.279334 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 24 23:36:11.279377 kernel: ACPI: button: Sleep Button [SLPB]
Apr 24 23:36:11.279409 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:36:11.279430 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 24 23:36:11.279675 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 24 23:36:11.279705 kernel: printk: console [ttyS0] disabled
Apr 24 23:36:11.279726 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 24 23:36:11.279746 kernel: printk: console [ttyS0] enabled
Apr 24 23:36:11.279765 kernel: printk: bootconsole [uart0] disabled
Apr 24 23:36:11.279784 kernel: thunder_xcv, ver 1.0
Apr 24 23:36:11.279802 kernel: thunder_bgx, ver 1.0
Apr 24 23:36:11.279828 kernel: nicpf, ver 1.0
Apr 24 23:36:11.279847 kernel: nicvf, ver 1.0
Apr 24 23:36:11.280081 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 24 23:36:11.280284 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-24T23:36:10 UTC (1777073770)
Apr 24 23:36:11.280310 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 24 23:36:11.280330 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 24 23:36:11.280378 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 24 23:36:11.280404 kernel: watchdog: Hard watchdog permanently disabled
Apr 24 23:36:11.280436 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:36:11.280456 kernel: Segment Routing with IPv6
Apr 24 23:36:11.280475 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:36:11.280494 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:36:11.280513 kernel: Key type dns_resolver registered
Apr 24 23:36:11.280533 kernel: registered taskstats version 1
Apr 24 23:36:11.280552 kernel: Loading compiled-in X.509 certificates
Apr 24 23:36:11.280572 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 96a6e7da7ac9a3ef656057ccd8e13f251b310c24'
Apr 24 23:36:11.280591 kernel: Key type .fscrypt registered
Apr 24 23:36:11.280615 kernel: Key type fscrypt-provisioning registered
Apr 24 23:36:11.280634 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:36:11.280653 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:36:11.280672 kernel: ima: No architecture policies found
Apr 24 23:36:11.280692 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 24 23:36:11.280711 kernel: clk: Disabling unused clocks
Apr 24 23:36:11.280730 kernel: Freeing unused kernel memory: 39424K
Apr 24 23:36:11.280749 kernel: Run /init as init process
Apr 24 23:36:11.280768 kernel: with arguments:
Apr 24 23:36:11.280791 kernel: /init
Apr 24 23:36:11.280810 kernel: with environment:
Apr 24 23:36:11.280829 kernel: HOME=/
Apr 24 23:36:11.280848 kernel: TERM=linux
Apr 24 23:36:11.280872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:36:11.280896 systemd[1]: Detected virtualization amazon.
Apr 24 23:36:11.280918 systemd[1]: Detected architecture arm64.
Apr 24 23:36:11.280938 systemd[1]: Running in initrd.
Apr 24 23:36:11.280963 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:36:11.280983 systemd[1]: Hostname set to .
Apr 24 23:36:11.281005 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:36:11.281025 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:36:11.281046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:36:11.281067 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:36:11.281090 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:36:11.281112 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:36:11.281138 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:36:11.281159 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:36:11.281183 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:36:11.281204 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:36:11.281225 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:36:11.281246 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:36:11.281271 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:36:11.281292 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:36:11.281313 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:36:11.281335 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:36:11.283821 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:36:11.283856 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:36:11.283878 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:36:11.283899 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:36:11.283921 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:36:11.283952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:36:11.283974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:36:11.283995 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:36:11.284016 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:36:11.284037 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:36:11.284057 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:36:11.284078 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:36:11.284100 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:36:11.284121 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:36:11.284146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:11.284168 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:36:11.284188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:36:11.284209 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:36:11.284232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:36:11.284302 systemd-journald[250]: Collecting audit messages is disabled.
Apr 24 23:36:11.284376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:11.284404 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:36:11.284431 kernel: Bridge firewalling registered
Apr 24 23:36:11.284451 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:36:11.284472 systemd-journald[250]: Journal started
Apr 24 23:36:11.284511 systemd-journald[250]: Runtime Journal (/run/log/journal/ec269bb60af2a6e1d006d09d6ba5a4ed) is 8.0M, max 75.3M, 67.3M free.
Apr 24 23:36:11.228459 systemd-modules-load[252]: Inserted module 'overlay'
Apr 24 23:36:11.278466 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 24 23:36:11.289364 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:36:11.290847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:36:11.292501 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:36:11.299468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:36:11.304316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:36:11.309991 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:36:11.358744 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:36:11.359964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:36:11.361791 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:36:11.399658 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:36:11.412651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:11.420926 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:36:11.461650 dracut-cmdline[291]: dracut-dracut-053
Apr 24 23:36:11.469568 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:36:11.499309 systemd-resolved[285]: Positive Trust Anchors:
Apr 24 23:36:11.499344 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:36:11.499445 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:36:11.651396 kernel: SCSI subsystem initialized
Apr 24 23:36:11.659395 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:36:11.672459 kernel: iscsi: registered transport (tcp)
Apr 24 23:36:11.695025 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:36:11.695120 kernel: QLogic iSCSI HBA Driver
Apr 24 23:36:11.762428 kernel: random: crng init done
Apr 24 23:36:11.763056 systemd-resolved[285]: Defaulting to hostname 'linux'.
Apr 24 23:36:11.769758 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:36:11.775771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:36:11.798416 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:36:11.816334 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:36:11.847668 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:36:11.847757 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:36:11.849657 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:36:11.916411 kernel: raid6: neonx8 gen() 6726 MB/s
Apr 24 23:36:11.933390 kernel: raid6: neonx4 gen() 6596 MB/s
Apr 24 23:36:11.950385 kernel: raid6: neonx2 gen() 5485 MB/s
Apr 24 23:36:11.967387 kernel: raid6: neonx1 gen() 3971 MB/s
Apr 24 23:36:11.984385 kernel: raid6: int64x8 gen() 3823 MB/s
Apr 24 23:36:12.001385 kernel: raid6: int64x4 gen() 3726 MB/s
Apr 24 23:36:12.018384 kernel: raid6: int64x2 gen() 3607 MB/s
Apr 24 23:36:12.036434 kernel: raid6: int64x1 gen() 2761 MB/s
Apr 24 23:36:12.036479 kernel: raid6: using algorithm neonx8 gen() 6726 MB/s
Apr 24 23:36:12.055410 kernel: raid6: .... xor() 4786 MB/s, rmw enabled
Apr 24 23:36:12.055446 kernel: raid6: using neon recovery algorithm
Apr 24 23:36:12.063390 kernel: xor: measuring software checksum speed
Apr 24 23:36:12.065638 kernel: 8regs : 10268 MB/sec
Apr 24 23:36:12.065672 kernel: 32regs : 11913 MB/sec
Apr 24 23:36:12.066940 kernel: arm64_neon : 9561 MB/sec
Apr 24 23:36:12.066973 kernel: xor: using function: 32regs (11913 MB/sec)
Apr 24 23:36:12.151407 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:36:12.171406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:36:12.187753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:36:12.230965 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Apr 24 23:36:12.239816 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:36:12.256608 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:36:12.284956 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Apr 24 23:36:12.342754 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:36:12.354661 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:36:12.481414 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:36:12.495658 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:36:12.530776 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:36:12.537208 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:36:12.544150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:36:12.548331 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:36:12.571708 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:36:12.617791 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:36:12.698376 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 24 23:36:12.698440 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 24 23:36:12.702409 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:36:12.703323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:12.719030 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 24 23:36:12.719384 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 24 23:36:12.710600 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:36:12.711298 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:36:12.712631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:12.730196 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:12.744604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:12.755388 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:d8:b6:3e:fb:33
Apr 24 23:36:12.758376 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 24 23:36:12.759061 (udev-worker)[528]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:36:12.763654 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 24 23:36:12.778389 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 24 23:36:12.787982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:12.794948 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:36:12.794985 kernel: GPT:9289727 != 33554431
Apr 24 23:36:12.795010 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:36:12.795035 kernel: GPT:9289727 != 33554431
Apr 24 23:36:12.795059 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:36:12.795083 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:12.806614 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:36:12.864278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:12.894899 kernel: BTRFS: device fsid 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (519)
Apr 24 23:36:12.916411 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (523)
Apr 24 23:36:12.969071 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 24 23:36:13.019618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 24 23:36:13.025586 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 24 23:36:13.046515 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 24 23:36:13.076944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 24 23:36:13.093630 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:36:13.107425 disk-uuid[664]: Primary Header is updated.
Apr 24 23:36:13.107425 disk-uuid[664]: Secondary Entries is updated.
Apr 24 23:36:13.107425 disk-uuid[664]: Secondary Header is updated.
Apr 24 23:36:13.119443 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:13.137394 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:13.145420 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:14.150383 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:14.151985 disk-uuid[665]: The operation has completed successfully.
Apr 24 23:36:14.346644 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:36:14.346850 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:36:14.390700 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:36:14.411753 sh[1008]: Success
Apr 24 23:36:14.442379 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 24 23:36:14.568831 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:36:14.577458 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:36:14.589592 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:36:14.643521 kernel: BTRFS info (device dm-0): first mount of filesystem 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e
Apr 24 23:36:14.643599 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:14.643627 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:36:14.646928 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:36:14.646964 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:36:14.672392 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 24 23:36:14.687267 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:36:14.690522 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:36:14.702794 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:36:14.709632 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:36:14.750395 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:14.750460 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:14.752042 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:36:14.770572 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:36:14.790769 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:36:14.799415 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:14.811640 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:36:14.822679 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:36:14.905653 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:36:14.921673 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:36:14.977680 systemd-networkd[1200]: lo: Link UP
Apr 24 23:36:14.977697 systemd-networkd[1200]: lo: Gained carrier
Apr 24 23:36:14.982401 systemd-networkd[1200]: Enumeration completed
Apr 24 23:36:14.983744 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:36:14.983752 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:36:14.984533 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:36:14.993727 systemd[1]: Reached target network.target - Network.
Apr 24 23:36:15.005621 systemd-networkd[1200]: eth0: Link UP
Apr 24 23:36:15.005636 systemd-networkd[1200]: eth0: Gained carrier
Apr 24 23:36:15.005663 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:36:15.028499 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.17.112/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 24 23:36:15.198883 ignition[1137]: Ignition 2.19.0
Apr 24 23:36:15.198922 ignition[1137]: Stage: fetch-offline
Apr 24 23:36:15.203224 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:15.203268 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:15.208127 ignition[1137]: Ignition finished successfully
Apr 24 23:36:15.211387 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:36:15.229410 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:36:15.252963 ignition[1210]: Ignition 2.19.0
Apr 24 23:36:15.252983 ignition[1210]: Stage: fetch
Apr 24 23:36:15.254102 ignition[1210]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:15.254128 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:15.254308 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:15.274009 ignition[1210]: PUT result: OK
Apr 24 23:36:15.277417 ignition[1210]: parsed url from cmdline: ""
Apr 24 23:36:15.277443 ignition[1210]: no config URL provided
Apr 24 23:36:15.277464 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:36:15.277494 ignition[1210]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:36:15.277530 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:15.281844 ignition[1210]: PUT result: OK
Apr 24 23:36:15.281929 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 24 23:36:15.290987 ignition[1210]: GET result: OK
Apr 24 23:36:15.291236 ignition[1210]: parsing config with SHA512: 58c65f2c0ceae97414f3edfb15e3b16af03c8dc2dfd9890152b62cf0f21e8fd00a58c7583541513adca0d81feae1e548d03f1c841a4959ff098595181b2d7e1d
Apr 24 23:36:15.301131 unknown[1210]: fetched base config from "system"
Apr 24 23:36:15.301164 unknown[1210]: fetched base config from "system"
Apr 24 23:36:15.301180 unknown[1210]: fetched user config from "aws"
Apr 24 23:36:15.307787 ignition[1210]: fetch: fetch complete
Apr 24 23:36:15.307802 ignition[1210]: fetch: fetch passed
Apr 24 23:36:15.307928 ignition[1210]: Ignition finished successfully
Apr 24 23:36:15.316703 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:36:15.327710 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:36:15.357544 ignition[1216]: Ignition 2.19.0
Apr 24 23:36:15.357572 ignition[1216]: Stage: kargs
Apr 24 23:36:15.359416 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:15.359444 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:15.360027 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:15.368206 ignition[1216]: PUT result: OK
Apr 24 23:36:15.372906 ignition[1216]: kargs: kargs passed
Apr 24 23:36:15.373007 ignition[1216]: Ignition finished successfully
Apr 24 23:36:15.380337 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:36:15.391659 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:36:15.418426 ignition[1223]: Ignition 2.19.0
Apr 24 23:36:15.418931 ignition[1223]: Stage: disks
Apr 24 23:36:15.419646 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:15.419672 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:15.419852 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:15.429122 ignition[1223]: PUT result: OK
Apr 24 23:36:15.433596 ignition[1223]: disks: disks passed
Apr 24 23:36:15.433703 ignition[1223]: Ignition finished successfully
Apr 24 23:36:15.436744 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:36:15.444573 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:36:15.447203 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:36:15.449969 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:36:15.452663 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:36:15.455033 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:36:15.474624 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:36:15.519237 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:36:15.526210 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:36:15.540098 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:36:15.632394 kernel: EXT4-fs (nvme0n1p9): mounted filesystem edaa698b-3baa-4242-8691-64cb9f35f18f r/w with ordered data mode. Quota mode: none.
Apr 24 23:36:15.634067 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:36:15.638580 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:36:15.658657 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:36:15.668112 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:36:15.672807 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:36:15.672920 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:36:15.672975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:36:15.698382 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1250)
Apr 24 23:36:15.706194 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:15.706276 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:15.707728 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:36:15.707732 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:36:15.719803 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:36:15.729402 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:36:15.732510 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:36:15.866447 initrd-setup-root[1276]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:36:15.875733 initrd-setup-root[1283]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:36:15.885199 initrd-setup-root[1290]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:36:15.894551 initrd-setup-root[1297]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:36:16.099932 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:36:16.110665 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:36:16.114677 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:36:16.142086 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:36:16.145176 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:16.178327 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:36:16.193608 ignition[1365]: INFO : Ignition 2.19.0
Apr 24 23:36:16.193608 ignition[1365]: INFO : Stage: mount
Apr 24 23:36:16.203946 ignition[1365]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:16.203946 ignition[1365]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:16.203946 ignition[1365]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:16.211944 ignition[1365]: INFO : PUT result: OK
Apr 24 23:36:16.216559 ignition[1365]: INFO : mount: mount passed
Apr 24 23:36:16.216559 ignition[1365]: INFO : Ignition finished successfully
Apr 24 23:36:16.223746 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:36:16.239564 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:36:16.270850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:36:16.293423 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1376)
Apr 24 23:36:16.297997 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:16.298079 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:16.298121 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:36:16.304400 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:36:16.309100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:36:16.352781 ignition[1393]: INFO : Ignition 2.19.0
Apr 24 23:36:16.352781 ignition[1393]: INFO : Stage: files
Apr 24 23:36:16.358259 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:16.358259 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:16.358259 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:16.358259 ignition[1393]: INFO : PUT result: OK
Apr 24 23:36:16.369804 ignition[1393]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:36:16.372739 ignition[1393]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:36:16.375905 ignition[1393]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:36:16.384490 ignition[1393]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:36:16.391555 ignition[1393]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:36:16.395503 unknown[1393]: wrote ssh authorized keys file for user: core
Apr 24 23:36:16.398128 ignition[1393]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:36:16.403649 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:36:16.403649 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 24 23:36:16.466506 systemd-networkd[1200]: eth0: Gained IPv6LL
Apr 24 23:36:16.492755 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:36:16.696269 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:36:16.696269 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:36:16.696269 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:36:16.696269 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:16.714430 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 24 23:36:17.190510 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 23:36:17.588084 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:17.588084 ignition[1393]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 23:36:17.596251 ignition[1393]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:36:17.596251 ignition[1393]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:36:17.596251 ignition[1393]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 23:36:17.596251 ignition[1393]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:36:17.596251 ignition[1393]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:36:17.596251 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:36:17.596251 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:36:17.596251 ignition[1393]: INFO : files: files passed
Apr 24 23:36:17.596251 ignition[1393]: INFO : Ignition finished successfully
Apr 24 23:36:17.610425 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:36:17.630878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:36:17.639727 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:36:17.643778 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:36:17.644002 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:36:17.683752 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:17.683752 initrd-setup-root-after-ignition[1422]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:17.692725 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:17.698250 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:36:17.705069 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:36:17.715817 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:36:17.773452 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:36:17.774237 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:36:17.782751 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:36:17.785570 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:36:17.788136 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:36:17.804707 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:36:17.837790 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:36:17.860850 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:36:17.892095 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:36:17.893448 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:36:17.906464 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:36:17.909185 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:36:17.912717 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:36:17.924400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:36:17.924551 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:36:17.932543 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:36:17.934879 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:36:17.937020 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:36:17.939568 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:36:17.942271 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:36:17.944907 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:36:17.947427 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:36:17.950223 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:36:17.952721 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:36:17.955093 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:36:17.957079 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:36:17.957210 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:36:17.972701 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:36:17.993883 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:36:17.997049 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:36:17.999748 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:36:18.003922 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:36:18.004047 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:36:18.016492 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:36:18.016609 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:36:18.019593 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:36:18.019701 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:36:18.036694 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:36:18.037665 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:36:18.038212 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:36:18.053292 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:36:18.059544 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:36:18.059695 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:36:18.065875 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:36:18.066009 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:36:18.092282 ignition[1447]: INFO : Ignition 2.19.0
Apr 24 23:36:18.094610 ignition[1447]: INFO : Stage: umount
Apr 24 23:36:18.097421 ignition[1447]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:18.097421 ignition[1447]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:18.097421 ignition[1447]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:18.105720 ignition[1447]: INFO : PUT result: OK
Apr 24 23:36:18.110647 ignition[1447]: INFO : umount: umount passed
Apr 24 23:36:18.112678 ignition[1447]: INFO : Ignition finished successfully
Apr 24 23:36:18.117182 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:36:18.120765 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:36:18.124578 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:36:18.124686 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:36:18.138990 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:36:18.139422 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:36:18.146733 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 23:36:18.146859 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 23:36:18.149294 systemd[1]: Stopped target network.target - Network.
Apr 24 23:36:18.151679 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:36:18.151828 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:36:18.154710 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:36:18.158866 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:36:18.163606 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:36:18.167780 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:36:18.170083 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:36:18.172834 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:36:18.173027 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:36:18.182055 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:36:18.182161 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:36:18.203569 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:36:18.203694 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:36:18.206156 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:36:18.206265 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:36:18.209851 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:36:18.216685 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:36:18.229011 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:36:18.230467 systemd-networkd[1200]: eth0: DHCPv6 lease lost
Apr 24 23:36:18.240306 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:36:18.240628 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:36:18.252963 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:36:18.255520 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:36:18.259725 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:36:18.259841 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:36:18.280599 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:36:18.282822 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:36:18.282948 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:36:18.288406 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:36:18.291478 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:36:18.300164 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:36:18.300366 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:36:18.310150 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:36:18.310253 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:36:18.320805 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:36:18.347553 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:36:18.347917 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:36:18.356578 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:36:18.356797 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:36:18.359838 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:36:18.360774 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:36:18.366695 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:36:18.366825 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:36:18.377333 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:36:18.377444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:36:18.380414 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:36:18.380509 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:36:18.383205 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:36:18.383297 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:36:18.400150 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:36:18.400383 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:18.408833 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:36:18.408933 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:36:18.428769 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:36:18.437578 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:36:18.437705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:36:18.440683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:36:18.440785 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:18.449099 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:36:18.451985 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:36:18.459044 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:36:18.483744 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:36:18.502734 systemd[1]: Switching root.
Apr 24 23:36:18.546439 systemd-journald[250]: Journal stopped
Apr 24 23:36:20.567505 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:36:20.567636 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:36:20.567686 kernel: SELinux: policy capability open_perms=1
Apr 24 23:36:20.567720 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:36:20.567758 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:36:20.567787 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:36:20.567818 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:36:20.567858 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:36:20.567888 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:36:20.567919 kernel: audit: type=1403 audit(1777073778.788:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:36:20.567952 systemd[1]: Successfully loaded SELinux policy in 53.243ms.
Apr 24 23:36:20.567996 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.775ms.
Apr 24 23:36:20.568034 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:36:20.568067 systemd[1]: Detected virtualization amazon.
Apr 24 23:36:20.568099 systemd[1]: Detected architecture arm64.
Apr 24 23:36:20.568131 systemd[1]: Detected first boot.
Apr 24 23:36:20.568162 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:36:20.568195 zram_generator::config[1488]: No configuration found.
Apr 24 23:36:20.568228 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:36:20.568260 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:36:20.568289 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:36:20.568324 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:36:20.572891 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:36:20.572953 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:36:20.572987 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:36:20.573018 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:36:20.573053 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:36:20.573085 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:36:20.573120 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:36:20.573160 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:36:20.573193 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:36:20.573226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:36:20.573256 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:36:20.573289 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:36:20.573322 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:36:20.573374 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:36:20.573411 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:36:20.573445 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:36:20.573480 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 23:36:20.573511 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 23:36:20.573541 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:36:20.573573 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:36:20.573602 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:36:20.573635 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:36:20.573664 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:36:20.573695 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:36:20.573728 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:36:20.573761 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:36:20.573794 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:36:20.573823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:36:20.573856 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:36:20.573885 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:36:20.573917 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:36:20.573947 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:36:20.573977 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:36:20.574010 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:36:20.574040 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:36:20.574070 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:36:20.574104 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:36:20.574134 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:36:20.574164 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:36:20.574194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:36:20.574223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:36:20.574252 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:36:20.574286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:36:20.574318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:36:20.576386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:36:20.576449 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:36:20.576481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:36:20.576515 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:36:20.576545 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 23:36:20.576577 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 23:36:20.576615 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 23:36:20.576646 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 23:36:20.576678 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:36:20.576707 kernel: ACPI: bus type drm_connector registered
Apr 24 23:36:20.576750 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:36:20.576779 kernel: fuse: init (API version 7.39)
Apr 24 23:36:20.576808 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:36:20.576839 kernel: loop: module loaded
Apr 24 23:36:20.576867 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:36:20.576904 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:36:20.576935 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 23:36:20.576967 systemd[1]: Stopped verity-setup.service.
Apr 24 23:36:20.576996 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:36:20.577028 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:36:20.577057 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:36:20.577087 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:36:20.577119 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:36:20.577153 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:36:20.577183 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:36:20.577213 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:36:20.577243 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:36:20.577272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:36:20.577301 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:36:20.577335 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:36:20.577387 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:36:20.577420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:36:20.577453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:36:20.577483 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:36:20.577513 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:36:20.577545 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:36:20.577577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:36:20.577611 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:36:20.577644 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:36:20.577674 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:36:20.577707 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:36:20.577740 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:36:20.577771 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:36:20.577849 systemd-journald[1569]: Collecting audit messages is disabled.
Apr 24 23:36:20.577914 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:36:20.577945 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:36:20.577976 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:36:20.578005 systemd-journald[1569]: Journal started
Apr 24 23:36:20.578057 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec269bb60af2a6e1d006d09d6ba5a4ed) is 8.0M, max 75.3M, 67.3M free.
Apr 24 23:36:19.881179 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 23:36:19.905821 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 24 23:36:19.906747 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 23:36:20.598875 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:36:20.610002 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:36:20.620410 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:36:20.626374 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:36:20.635475 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:36:20.652548 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:36:20.652639 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:36:20.667880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:36:20.677763 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:36:20.687844 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:36:20.700639 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:36:20.704042 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:36:20.707286 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:36:20.711517 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:36:20.735583 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:36:20.798197 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:36:20.820409 kernel: loop0: detected capacity change from 0 to 114432
Apr 24 23:36:20.815715 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:36:20.826727 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:36:20.847667 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:36:20.871454 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:36:20.891783 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:36:20.885346 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:36:20.893212 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:36:20.898028 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:36:20.919304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:36:20.932393 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec269bb60af2a6e1d006d09d6ba5a4ed is 72.641ms for 909 entries.
Apr 24 23:36:20.932393 systemd-journald[1569]: System Journal (/var/log/journal/ec269bb60af2a6e1d006d09d6ba5a4ed) is 8.0M, max 195.6M, 187.6M free.
Apr 24 23:36:21.037543 systemd-journald[1569]: Received client request to flush runtime journal.
Apr 24 23:36:21.037662 kernel: loop1: detected capacity change from 0 to 209336
Apr 24 23:36:21.005402 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 24 23:36:21.017467 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:36:21.029788 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:36:21.045483 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:36:21.107653 kernel: loop2: detected capacity change from 0 to 52536
Apr 24 23:36:21.130191 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Apr 24 23:36:21.130226 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Apr 24 23:36:21.150886 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:36:21.223390 kernel: loop3: detected capacity change from 0 to 114328
Apr 24 23:36:21.278081 kernel: loop4: detected capacity change from 0 to 114432
Apr 24 23:36:21.304346 kernel: loop5: detected capacity change from 0 to 209336
Apr 24 23:36:21.339406 kernel: loop6: detected capacity change from 0 to 52536
Apr 24 23:36:21.367412 kernel: loop7: detected capacity change from 0 to 114328
Apr 24 23:36:21.390909 (sd-merge)[1642]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 24 23:36:21.392119 (sd-merge)[1642]: Merged extensions into '/usr'.
Apr 24 23:36:21.408590 systemd[1]: Reloading requested from client PID 1599 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:36:21.408896 systemd[1]: Reloading...
Apr 24 23:36:21.692399 ldconfig[1595]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:36:21.738408 zram_generator::config[1680]: No configuration found.
Apr 24 23:36:21.942766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:36:22.072323 systemd[1]: Reloading finished in 662 ms.
Apr 24 23:36:22.117456 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:36:22.121030 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:36:22.124560 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:36:22.144724 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:36:22.159856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:36:22.168762 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:36:22.192462 systemd[1]: Reloading requested from client PID 1721 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:36:22.192502 systemd[1]: Reloading...
Apr 24 23:36:22.213695 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:36:22.215112 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:36:22.217603 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:36:22.218450 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Apr 24 23:36:22.218925 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Apr 24 23:36:22.227301 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:36:22.227735 systemd-tmpfiles[1722]: Skipping /boot
Apr 24 23:36:22.255756 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:36:22.255781 systemd-tmpfiles[1722]: Skipping /boot
Apr 24 23:36:22.311652 systemd-udevd[1723]: Using default interface naming scheme 'v255'.
Apr 24 23:36:22.430138 zram_generator::config[1759]: No configuration found.
Apr 24 23:36:22.602514 (udev-worker)[1799]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:36:22.822413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:36:22.859447 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1800)
Apr 24 23:36:22.979840 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 24 23:36:22.982204 systemd[1]: Reloading finished in 789 ms.
Apr 24 23:36:23.026532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:36:23.063473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:36:23.120473 systemd[1]: Finished ensure-sysext.service.
Apr 24 23:36:23.162649 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 24 23:36:23.179868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 24 23:36:23.197748 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:36:23.206397 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 23:36:23.210905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:36:23.220675 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 24 23:36:23.226717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:36:23.233685 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:36:23.239715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:36:23.245601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:36:23.251622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:36:23.265825 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 24 23:36:23.281663 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 23:36:23.292767 lvm[1921]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:36:23.301793 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:36:23.314564 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:36:23.317098 systemd[1]: Reached target time-set.target - System Time Set.
Apr 24 23:36:23.325654 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 23:36:23.332004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:23.414194 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 23:36:23.417168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:36:23.417585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:36:23.440628 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 24 23:36:23.443735 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:36:23.449937 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 24 23:36:23.458561 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:36:23.458999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:36:23.466009 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:36:23.475684 augenrules[1948]: No rules
Apr 24 23:36:23.471216 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:36:23.471682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:36:23.476124 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:36:23.481581 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:36:23.491685 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:36:23.492102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:36:23.499675 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 24 23:36:23.528774 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 23:36:23.548851 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 23:36:23.557424 lvm[1950]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:36:23.562724 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 24 23:36:23.588965 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 23:36:23.592535 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 23:36:23.611062 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 24 23:36:23.638252 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 23:36:23.656541 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 24 23:36:23.701039 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:23.781627 systemd-networkd[1934]: lo: Link UP
Apr 24 23:36:23.781652 systemd-networkd[1934]: lo: Gained carrier
Apr 24 23:36:23.784628 systemd-networkd[1934]: Enumeration completed
Apr 24 23:36:23.784826 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:36:23.793886 systemd-resolved[1935]: Positive Trust Anchors:
Apr 24 23:36:23.794436 systemd-resolved[1935]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:36:23.794718 systemd-resolved[1935]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:36:23.796485 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:36:23.796506 systemd-networkd[1934]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:36:23.802757 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 24 23:36:23.808556 systemd-networkd[1934]: eth0: Link UP
Apr 24 23:36:23.809740 systemd-networkd[1934]: eth0: Gained carrier
Apr 24 23:36:23.809789 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:36:23.812622 systemd-resolved[1935]: Defaulting to hostname 'linux'.
Apr 24 23:36:23.816235 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:36:23.819465 systemd[1]: Reached target network.target - Network. Apr 24 23:36:23.821527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:36:23.824613 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:36:23.827198 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:36:23.830124 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:36:23.833423 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 23:36:23.836295 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:36:23.839324 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 23:36:23.841126 systemd-networkd[1934]: eth0: DHCPv4 address 172.31.17.112/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 24 23:36:23.842680 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:36:23.842739 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:36:23.844904 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:36:23.848105 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:36:23.855211 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:36:23.870775 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 23:36:23.874470 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:36:23.877097 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:36:23.879343 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:36:23.881516 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Apr 24 23:36:23.881576 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:36:23.883824 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 23:36:23.893119 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 24 23:36:23.907929 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:36:23.915607 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:36:23.923615 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 23:36:23.928265 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 23:36:23.931624 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 23:36:23.943250 systemd[1]: Started ntpd.service - Network Time Service. Apr 24 23:36:23.955572 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 23:36:23.970193 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 24 23:36:23.978716 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 23:36:23.989000 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 23:36:24.001764 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:36:24.008255 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 23:36:24.010550 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:36:24.014756 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 24 23:36:24.021775 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 23:36:24.094247 jq[1984]: false Apr 24 23:36:24.115930 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:36:24.116301 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 23:36:24.117061 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:36:24.118450 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 23:36:24.140339 extend-filesystems[1985]: Found loop4 Apr 24 23:36:24.148114 extend-filesystems[1985]: Found loop5 Apr 24 23:36:24.148114 extend-filesystems[1985]: Found loop6 Apr 24 23:36:24.148114 extend-filesystems[1985]: Found loop7 Apr 24 23:36:24.148114 extend-filesystems[1985]: Found nvme0n1 Apr 24 23:36:24.148114 extend-filesystems[1985]: Found nvme0n1p1 Apr 24 23:36:24.148114 extend-filesystems[1985]: Found nvme0n1p2 Apr 24 23:36:24.204167 jq[1996]: true Apr 24 23:36:24.184983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 23:36:24.221404 extend-filesystems[1985]: Found nvme0n1p3 Apr 24 23:36:24.221404 extend-filesystems[1985]: Found usr Apr 24 23:36:24.221404 extend-filesystems[1985]: Found nvme0n1p4 Apr 24 23:36:24.221404 extend-filesystems[1985]: Found nvme0n1p6 Apr 24 23:36:24.221404 extend-filesystems[1985]: Found nvme0n1p7 Apr 24 23:36:24.221404 extend-filesystems[1985]: Found nvme0n1p9 Apr 24 23:36:24.221404 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Apr 24 23:36:24.187028 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 24 23:36:24.215776 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:50:58 UTC 2026 (1): Starting Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:50:58 UTC 2026 (1): Starting Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: ---------------------------------------------------- Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: corporation. Support and training for ntp-4 are Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: available at https://www.nwtime.org/support Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: ---------------------------------------------------- Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: proto: precision = 0.096 usec (-23) Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: basedate set to 2026-04-12 Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: gps base set to 2026-04-12 (week 2414) Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: Listen normally on 3 eth0 172.31.17.112:123 Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: Listen normally on 4 lo [::1]:123 Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: bind(21) AF_INET6 fe80::4d8:b6ff:fe3e:fb33%2#123 flags 0x11 failed: Cannot assign requested address Apr 24 
23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: unable to create socket on eth0 (5) for fe80::4d8:b6ff:fe3e:fb33%2#123 Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: failed to init interface for address fe80::4d8:b6ff:fe3e:fb33%2 Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:36:24.292268 ntpd[1987]: 24 Apr 23:36:24 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:36:24.259089 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 23:36:24.215824 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 24 23:36:24.270256 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 23:36:24.215876 ntpd[1987]: ---------------------------------------------------- Apr 24 23:36:24.270312 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 23:36:24.215898 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Apr 24 23:36:24.276923 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 23:36:24.215919 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 24 23:36:24.276965 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 23:36:24.215938 ntpd[1987]: corporation. 
Support and training for ntp-4 are Apr 24 23:36:24.215957 ntpd[1987]: available at https://www.nwtime.org/support Apr 24 23:36:24.215975 ntpd[1987]: ---------------------------------------------------- Apr 24 23:36:24.228176 ntpd[1987]: proto: precision = 0.096 usec (-23) Apr 24 23:36:24.231675 ntpd[1987]: basedate set to 2026-04-12 Apr 24 23:36:24.231708 ntpd[1987]: gps base set to 2026-04-12 (week 2414) Apr 24 23:36:24.238367 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Apr 24 23:36:24.238458 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 24 23:36:24.240596 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Apr 24 23:36:24.240669 ntpd[1987]: Listen normally on 3 eth0 172.31.17.112:123 Apr 24 23:36:24.240738 ntpd[1987]: Listen normally on 4 lo [::1]:123 Apr 24 23:36:24.240821 ntpd[1987]: bind(21) AF_INET6 fe80::4d8:b6ff:fe3e:fb33%2#123 flags 0x11 failed: Cannot assign requested address Apr 24 23:36:24.240861 ntpd[1987]: unable to create socket on eth0 (5) for fe80::4d8:b6ff:fe3e:fb33%2#123 Apr 24 23:36:24.240890 ntpd[1987]: failed to init interface for address fe80::4d8:b6ff:fe3e:fb33%2 Apr 24 23:36:24.240946 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Apr 24 23:36:24.258438 dbus-daemon[1983]: [system] SELinux support is enabled Apr 24 23:36:24.282709 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:36:24.282785 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:36:24.293521 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1934 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 24 23:36:24.304228 update_engine[1995]: I20260424 23:36:24.302527 1995 main.cc:92] Flatcar Update Engine starting Apr 24 23:36:24.303891 (ntainerd)[2018]: containerd.service: Referenced but unset environment variable evaluates to an empty 
string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:36:24.311240 update_engine[1995]: I20260424 23:36:24.311167 1995 update_check_scheduler.cc:74] Next update check in 8m59s Apr 24 23:36:24.312851 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 24 23:36:24.318738 systemd[1]: Started update-engine.service - Update Engine. Apr 24 23:36:24.333646 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 24 23:36:24.339061 tar[1998]: linux-arm64/LICENSE Apr 24 23:36:24.341422 tar[1998]: linux-arm64/helm Apr 24 23:36:24.350785 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Apr 24 23:36:24.363292 jq[2021]: true Apr 24 23:36:24.371136 extend-filesystems[2032]: resize2fs 1.47.1 (20-May-2024) Apr 24 23:36:24.392890 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Apr 24 23:36:24.396746 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 24 23:36:24.402602 systemd-logind[1993]: New seat seat0. Apr 24 23:36:24.407533 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 24 23:36:24.455090 systemd[1]: Started systemd-logind.service - User Login Management. Apr 24 23:36:24.500346 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 24 23:36:24.569605 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 24 23:36:24.587410 extend-filesystems[2032]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 24 23:36:24.587410 extend-filesystems[2032]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 24 23:36:24.587410 extend-filesystems[2032]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 24 23:36:24.594011 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Apr 24 23:36:24.589664 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 23:36:24.590874 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Apr 24 23:36:24.638798 coreos-metadata[1982]: Apr 24 23:36:24.637 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 24 23:36:24.645385 coreos-metadata[1982]: Apr 24 23:36:24.643 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 24 23:36:24.646544 coreos-metadata[1982]: Apr 24 23:36:24.646 INFO Fetch successful Apr 24 23:36:24.647101 coreos-metadata[1982]: Apr 24 23:36:24.646 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 24 23:36:24.651552 coreos-metadata[1982]: Apr 24 23:36:24.647 INFO Fetch successful Apr 24 23:36:24.652368 coreos-metadata[1982]: Apr 24 23:36:24.651 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 24 23:36:24.652368 coreos-metadata[1982]: Apr 24 23:36:24.652 INFO Fetch successful Apr 24 23:36:24.652368 coreos-metadata[1982]: Apr 24 23:36:24.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 24 23:36:24.652368 coreos-metadata[1982]: Apr 24 23:36:24.652 INFO Fetch successful Apr 24 23:36:24.652368 coreos-metadata[1982]: Apr 24 23:36:24.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 24 23:36:24.655385 coreos-metadata[1982]: Apr 24 23:36:24.653 INFO Fetch failed with 404: resource not found Apr 24 23:36:24.655385 coreos-metadata[1982]: Apr 24 23:36:24.653 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 24 23:36:24.657549 coreos-metadata[1982]: Apr 24 23:36:24.657 INFO Fetch successful Apr 24 23:36:24.657549 coreos-metadata[1982]: Apr 24 23:36:24.657 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 24 23:36:24.659632 coreos-metadata[1982]: Apr 24 23:36:24.658 INFO Fetch successful Apr 24 23:36:24.659632 coreos-metadata[1982]: Apr 24 23:36:24.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 24 
23:36:24.660824 coreos-metadata[1982]: Apr 24 23:36:24.660 INFO Fetch successful Apr 24 23:36:24.665327 coreos-metadata[1982]: Apr 24 23:36:24.660 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 24 23:36:24.665327 coreos-metadata[1982]: Apr 24 23:36:24.665 INFO Fetch successful Apr 24 23:36:24.665327 coreos-metadata[1982]: Apr 24 23:36:24.665 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 24 23:36:24.666413 coreos-metadata[1982]: Apr 24 23:36:24.666 INFO Fetch successful Apr 24 23:36:24.713136 bash[2060]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:36:24.721440 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 23:36:24.773861 systemd[1]: Starting sshkeys.service... Apr 24 23:36:24.886379 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1790) Apr 24 23:36:24.895583 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 24 23:36:24.900552 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2026 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 24 23:36:24.912793 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 24 23:36:24.922927 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 24 23:36:24.931473 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 24 23:36:24.937250 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 24 23:36:24.958040 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 24 23:36:24.972010 systemd[1]: Starting polkit.service - Authorization Manager... Apr 24 23:36:25.064477 polkitd[2104]: Started polkitd version 121 Apr 24 23:36:25.099718 sshd_keygen[2007]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 23:36:25.106091 containerd[2018]: time="2026-04-24T23:36:25.105937532Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 24 23:36:25.126957 polkitd[2104]: Loading rules from directory /etc/polkit-1/rules.d Apr 24 23:36:25.135765 polkitd[2104]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 24 23:36:25.137590 polkitd[2104]: Finished loading, compiling and executing 2 rules Apr 24 23:36:25.142767 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 24 23:36:25.143104 systemd[1]: Started polkit.service - Authorization Manager. Apr 24 23:36:25.147717 polkitd[2104]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 24 23:36:25.219773 ntpd[1987]: bind(24) AF_INET6 fe80::4d8:b6ff:fe3e:fb33%2#123 flags 0x11 failed: Cannot assign requested address Apr 24 23:36:25.221843 ntpd[1987]: 24 Apr 23:36:25 ntpd[1987]: bind(24) AF_INET6 fe80::4d8:b6ff:fe3e:fb33%2#123 flags 0x11 failed: Cannot assign requested address Apr 24 23:36:25.221843 ntpd[1987]: 24 Apr 23:36:25 ntpd[1987]: unable to create socket on eth0 (6) for fe80::4d8:b6ff:fe3e:fb33%2#123 Apr 24 23:36:25.221843 ntpd[1987]: 24 Apr 23:36:25 ntpd[1987]: failed to init interface for address fe80::4d8:b6ff:fe3e:fb33%2 Apr 24 23:36:25.219845 ntpd[1987]: unable to create socket on eth0 (6) for fe80::4d8:b6ff:fe3e:fb33%2#123 Apr 24 23:36:25.219875 ntpd[1987]: failed to init interface for address fe80::4d8:b6ff:fe3e:fb33%2 Apr 24 23:36:25.234914 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 23:36:25.273678 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Apr 24 23:36:25.294916 systemd-resolved[1935]: System hostname changed to 'ip-172-31-17-112'. Apr 24 23:36:25.295433 systemd-hostnamed[2026]: Hostname set to (transient) Apr 24 23:36:25.303297 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:36:25.306456 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 23:36:25.326802 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:36:25.354436 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 23:36:25.362596 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:36:25.376391 containerd[2018]: time="2026-04-24T23:36:25.367445493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:36:25.369871 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 23:36:25.373068 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.389425077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.389495037Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.389530233Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.389846373Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.389886369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.390019629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.390049917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.390385869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.390429921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.390467637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:36:25.391409 containerd[2018]: time="2026-04-24T23:36:25.390492201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 24 23:36:25.389606 locksmithd[2030]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 23:36:25.392500 containerd[2018]: time="2026-04-24T23:36:25.390693489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Apr 24 23:36:25.392500 containerd[2018]: time="2026-04-24T23:36:25.391177917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:36:25.404661 containerd[2018]: time="2026-04-24T23:36:25.399059398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:36:25.404661 containerd[2018]: time="2026-04-24T23:36:25.399147370Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 24 23:36:25.404661 containerd[2018]: time="2026-04-24T23:36:25.399585946Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 24 23:36:25.404661 containerd[2018]: time="2026-04-24T23:36:25.400919602Z" level=info msg="metadata content store policy set" policy=shared Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.417189574Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.417404602Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.417447586Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.417484354Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.417516826Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.417794722Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.418194394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.418486486Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.418541542Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.418589998Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.418624774Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.418658494Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.418691038Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 24 23:36:25.420380 containerd[2018]: time="2026-04-24T23:36:25.418726138Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.418760626Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.418793890Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.418824190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.418854874Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.418900618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.418933054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.418963222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.418996942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.419046418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.419084902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.419114578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.419146054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.419181706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421045 containerd[2018]: time="2026-04-24T23:36:25.419219110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421679 containerd[2018]: time="2026-04-24T23:36:25.419261182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421679 containerd[2018]: time="2026-04-24T23:36:25.419292454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.421679 containerd[2018]: time="2026-04-24T23:36:25.419322082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.426694 systemd-networkd[1934]: eth0: Gained IPv6LL Apr 24 23:36:25.429891 containerd[2018]: time="2026-04-24T23:36:25.426689278Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 24 23:36:25.429891 containerd[2018]: time="2026-04-24T23:36:25.426810238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.429891 containerd[2018]: time="2026-04-24T23:36:25.426877366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.429891 containerd[2018]: time="2026-04-24T23:36:25.426960418Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 24 23:36:25.434395 containerd[2018]: time="2026-04-24T23:36:25.427335022Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 24 23:36:25.434395 containerd[2018]: time="2026-04-24T23:36:25.432478042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 24 23:36:25.434395 containerd[2018]: time="2026-04-24T23:36:25.432572530Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 24 23:36:25.434395 containerd[2018]: time="2026-04-24T23:36:25.432639574Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 24 23:36:25.435379 containerd[2018]: time="2026-04-24T23:36:25.432678778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 24 23:36:25.435379 containerd[2018]: time="2026-04-24T23:36:25.434819314Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 24 23:36:25.435379 containerd[2018]: time="2026-04-24T23:36:25.434858698Z" level=info msg="NRI interface is disabled by configuration." Apr 24 23:36:25.435379 containerd[2018]: time="2026-04-24T23:36:25.434915674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1
Apr 24 23:36:25.442420 containerd[2018]: time="2026-04-24T23:36:25.440421922Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 24 23:36:25.442420 containerd[2018]: time="2026-04-24T23:36:25.441727438Z" level=info msg="Connect containerd service"
Apr 24 23:36:25.442420 containerd[2018]: time="2026-04-24T23:36:25.441853390Z" level=info msg="using legacy CRI server"
Apr 24 23:36:25.442420 containerd[2018]: time="2026-04-24T23:36:25.441920314Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 23:36:25.442420 containerd[2018]: time="2026-04-24T23:36:25.442177642Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 24 23:36:25.447735 coreos-metadata[2097]: Apr 24 23:36:25.445 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 24 23:36:25.447735 coreos-metadata[2097]: Apr 24 23:36:25.445 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 24 23:36:25.447735 coreos-metadata[2097]: Apr 24 23:36:25.446 INFO Fetch successful
Apr 24 23:36:25.447735 coreos-metadata[2097]: Apr 24 23:36:25.446 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 24 23:36:25.447735 coreos-metadata[2097]: Apr 24 23:36:25.447 INFO Fetch successful
Apr 24 23:36:25.453573 unknown[2097]: wrote ssh authorized keys file for user: core
Apr 24 23:36:25.457265 containerd[2018]: time="2026-04-24T23:36:25.457206670Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 23:36:25.458580 containerd[2018]: time="2026-04-24T23:36:25.458490622Z" level=info msg="Start subscribing containerd event"
Apr 24 23:36:25.463009 containerd[2018]: time="2026-04-24T23:36:25.461177206Z" level=info msg="Start recovering state"
Apr 24 23:36:25.463009 containerd[2018]: time="2026-04-24T23:36:25.461417242Z" level=info msg="Start event monitor"
Apr 24 23:36:25.463009 containerd[2018]: time="2026-04-24T23:36:25.461455486Z" level=info msg="Start snapshots syncer"
Apr 24 23:36:25.463009 containerd[2018]: time="2026-04-24T23:36:25.461479054Z" level=info msg="Start cni network conf syncer for default"
Apr 24 23:36:25.463009 containerd[2018]: time="2026-04-24T23:36:25.461499382Z" level=info msg="Start streaming server"
Apr 24 23:36:25.465396 containerd[2018]: time="2026-04-24T23:36:25.463775506Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 24 23:36:25.465396 containerd[2018]: time="2026-04-24T23:36:25.463955854Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 24 23:36:25.465396 containerd[2018]: time="2026-04-24T23:36:25.464075086Z" level=info msg="containerd successfully booted in 0.361523s"
Apr 24 23:36:25.499934 systemd[1]: Started containerd.service - containerd container runtime.
Apr 24 23:36:25.503848 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 24 23:36:25.555347 update-ssh-keys[2198]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:36:25.566396 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 24 23:36:25.580475 systemd[1]: Finished sshkeys.service.
Apr 24 23:36:25.584950 systemd[1]: Reached target network-online.target - Network is Online.
Apr 24 23:36:25.599873 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 24 23:36:25.615683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:25.624382 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 24 23:36:25.750789 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 24 23:36:25.774253 amazon-ssm-agent[2203]: Initializing new seelog logger
Apr 24 23:36:25.774888 amazon-ssm-agent[2203]: New Seelog Logger Creation Complete
Apr 24 23:36:25.774888 amazon-ssm-agent[2203]: 2026/04/24 23:36:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:25.774888 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:25.775678 amazon-ssm-agent[2203]: 2026/04/24 23:36:25 processing appconfig overrides
Apr 24 23:36:25.776426 amazon-ssm-agent[2203]: 2026/04/24 23:36:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:25.776426 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:25.776594 amazon-ssm-agent[2203]: 2026/04/24 23:36:25 processing appconfig overrides
Apr 24 23:36:25.777746 amazon-ssm-agent[2203]: 2026/04/24 23:36:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:25.777746 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:25.777746 amazon-ssm-agent[2203]: 2026/04/24 23:36:25 processing appconfig overrides
Apr 24 23:36:25.778437 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO Proxy environment variables:
Apr 24 23:36:25.784480 amazon-ssm-agent[2203]: 2026/04/24 23:36:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:25.784480 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:36:25.785126 amazon-ssm-agent[2203]: 2026/04/24 23:36:25 processing appconfig overrides
Apr 24 23:36:25.879143 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO https_proxy:
Apr 24 23:36:25.978684 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO http_proxy:
Apr 24 23:36:26.076436 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO no_proxy:
Apr 24 23:36:26.177245 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO Checking if agent identity type OnPrem can be assumed
Apr 24 23:36:26.273497 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO Checking if agent identity type EC2 can be assumed
Apr 24 23:36:26.313991 tar[1998]: linux-arm64/README.md
Apr 24 23:36:26.339286 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 24 23:36:26.372118 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO Agent will take identity from EC2
Apr 24 23:36:26.405028 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:36:26.405434 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:36:26.405696 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:36:26.405696 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 24 23:36:26.405696 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Apr 24 23:36:26.405696 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO [amazon-ssm-agent] Starting Core Agent
Apr 24 23:36:26.405696 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 24 23:36:26.405696 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO [Registrar] Starting registrar module
Apr 24 23:36:26.406321 amazon-ssm-agent[2203]: 2026-04-24 23:36:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 24 23:36:26.406321 amazon-ssm-agent[2203]: 2026-04-24 23:36:26 INFO [EC2Identity] EC2 registration was successful.
Apr 24 23:36:26.406321 amazon-ssm-agent[2203]: 2026-04-24 23:36:26 INFO [CredentialRefresher] credentialRefresher has started
Apr 24 23:36:26.406321 amazon-ssm-agent[2203]: 2026-04-24 23:36:26 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 24 23:36:26.406321 amazon-ssm-agent[2203]: 2026-04-24 23:36:26 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 24 23:36:26.471560 amazon-ssm-agent[2203]: 2026-04-24 23:36:26 INFO [CredentialRefresher] Next credential rotation will be in 31.71663832765 minutes
Apr 24 23:36:27.190005 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 24 23:36:27.204870 systemd[1]: Started sshd@0-172.31.17.112:22-20.229.252.112:60376.service - OpenSSH per-connection server daemon (20.229.252.112:60376).
Apr 24 23:36:27.438410 amazon-ssm-agent[2203]: 2026-04-24 23:36:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 24 23:36:27.537458 amazon-ssm-agent[2203]: 2026-04-24 23:36:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2228) started
Apr 24 23:36:27.639506 amazon-ssm-agent[2203]: 2026-04-24 23:36:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 24 23:36:27.831565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:27.838916 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 24 23:36:27.846649 systemd[1]: Startup finished in 1.175s (kernel) + 7.972s (initrd) + 9.111s (userspace) = 18.259s.
Apr 24 23:36:27.852003 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:36:28.217049 ntpd[1987]: Listen normally on 7 eth0 [fe80::4d8:b6ff:fe3e:fb33%2]:123
Apr 24 23:36:28.217627 ntpd[1987]: 24 Apr 23:36:28 ntpd[1987]: Listen normally on 7 eth0 [fe80::4d8:b6ff:fe3e:fb33%2]:123
Apr 24 23:36:28.258715 sshd[2225]: Accepted publickey for core from 20.229.252.112 port 60376 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:36:28.262862 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:28.290483 systemd-logind[1993]: New session 1 of user core.
Apr 24 23:36:28.292775 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 24 23:36:28.302897 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 24 23:36:28.335506 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 24 23:36:28.346058 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 24 23:36:28.366324 (systemd)[2253]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 24 23:36:28.616572 systemd[2253]: Queued start job for default target default.target.
Apr 24 23:36:28.628443 systemd[2253]: Created slice app.slice - User Application Slice.
Apr 24 23:36:28.628519 systemd[2253]: Reached target paths.target - Paths.
Apr 24 23:36:28.628553 systemd[2253]: Reached target timers.target - Timers.
Apr 24 23:36:28.631568 systemd[2253]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 24 23:36:28.666487 systemd[2253]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 24 23:36:28.666987 systemd[2253]: Reached target sockets.target - Sockets.
Apr 24 23:36:28.667207 systemd[2253]: Reached target basic.target - Basic System.
Apr 24 23:36:28.667318 systemd[2253]: Reached target default.target - Main User Target.
Apr 24 23:36:28.667430 systemd[2253]: Startup finished in 287ms.
Apr 24 23:36:28.667911 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 24 23:36:28.677692 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 24 23:36:29.079696 kubelet[2242]: E0424 23:36:29.079603 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:36:29.088940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:36:29.090009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:36:29.092821 systemd[1]: kubelet.service: Consumed 1.403s CPU time.
Apr 24 23:36:29.412921 systemd[1]: Started sshd@1-172.31.17.112:22-20.229.252.112:60388.service - OpenSSH per-connection server daemon (20.229.252.112:60388).
Apr 24 23:36:30.454403 sshd[2266]: Accepted publickey for core from 20.229.252.112 port 60388 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:36:30.456188 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:30.465549 systemd-logind[1993]: New session 2 of user core.
Apr 24 23:36:30.476744 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 24 23:36:31.167571 sshd[2266]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:31.174700 systemd[1]: sshd@1-172.31.17.112:22-20.229.252.112:60388.service: Deactivated successfully.
Apr 24 23:36:31.177760 systemd[1]: session-2.scope: Deactivated successfully.
Apr 24 23:36:31.181792 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit.
Apr 24 23:36:31.184243 systemd-logind[1993]: Removed session 2.
Apr 24 23:36:30.959686 systemd-resolved[1935]: Clock change detected. Flushing caches.
Apr 24 23:36:30.967235 systemd-journald[1569]: Time jumped backwards, rotating.
Apr 24 23:36:31.099515 systemd[1]: Started sshd@2-172.31.17.112:22-20.229.252.112:60398.service - OpenSSH per-connection server daemon (20.229.252.112:60398).
Apr 24 23:36:32.131678 sshd[2275]: Accepted publickey for core from 20.229.252.112 port 60398 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:36:32.133723 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:32.144334 systemd-logind[1993]: New session 3 of user core.
Apr 24 23:36:32.152335 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 24 23:36:32.836495 sshd[2275]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:32.844895 systemd[1]: sshd@2-172.31.17.112:22-20.229.252.112:60398.service: Deactivated successfully.
Apr 24 23:36:32.848204 systemd[1]: session-3.scope: Deactivated successfully.
Apr 24 23:36:32.850313 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit.
Apr 24 23:36:32.852953 systemd-logind[1993]: Removed session 3.
Apr 24 23:36:33.018562 systemd[1]: Started sshd@3-172.31.17.112:22-20.229.252.112:60406.service - OpenSSH per-connection server daemon (20.229.252.112:60406).
Apr 24 23:36:34.062274 sshd[2282]: Accepted publickey for core from 20.229.252.112 port 60406 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:36:34.065107 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:34.074364 systemd-logind[1993]: New session 4 of user core.
Apr 24 23:36:34.083293 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 24 23:36:34.775691 sshd[2282]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:34.780672 systemd[1]: sshd@3-172.31.17.112:22-20.229.252.112:60406.service: Deactivated successfully.
Apr 24 23:36:34.783222 systemd[1]: session-4.scope: Deactivated successfully.
Apr 24 23:36:34.786189 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit.
Apr 24 23:36:34.788416 systemd-logind[1993]: Removed session 4.
Apr 24 23:36:34.961471 systemd[1]: Started sshd@4-172.31.17.112:22-20.229.252.112:60408.service - OpenSSH per-connection server daemon (20.229.252.112:60408).
Apr 24 23:36:35.989317 sshd[2289]: Accepted publickey for core from 20.229.252.112 port 60408 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:36:35.992114 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:36.001381 systemd-logind[1993]: New session 5 of user core.
Apr 24 23:36:36.008644 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 24 23:36:36.638875 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 24 23:36:36.639584 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:36:36.656689 sudo[2292]: pam_unix(sudo:session): session closed for user root
Apr 24 23:36:36.824107 sshd[2289]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:36.829778 systemd[1]: sshd@4-172.31.17.112:22-20.229.252.112:60408.service: Deactivated successfully.
Apr 24 23:36:36.833014 systemd[1]: session-5.scope: Deactivated successfully.
Apr 24 23:36:36.836323 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit.
Apr 24 23:36:36.838810 systemd-logind[1993]: Removed session 5.
Apr 24 23:36:37.010521 systemd[1]: Started sshd@5-172.31.17.112:22-20.229.252.112:41344.service - OpenSSH per-connection server daemon (20.229.252.112:41344).
Apr 24 23:36:38.051342 sshd[2297]: Accepted publickey for core from 20.229.252.112 port 41344 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:36:38.053096 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:38.061417 systemd-logind[1993]: New session 6 of user core.
Apr 24 23:36:38.071257 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 24 23:36:38.598644 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 24 23:36:38.599892 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:36:38.606512 sudo[2301]: pam_unix(sudo:session): session closed for user root
Apr 24 23:36:38.616493 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 24 23:36:38.617184 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:36:38.638466 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 24 23:36:38.653764 auditctl[2304]: No rules
Apr 24 23:36:38.654597 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 23:36:38.654950 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 24 23:36:38.663654 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:36:38.717856 augenrules[2322]: No rules
Apr 24 23:36:38.720486 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:36:38.723276 sudo[2300]: pam_unix(sudo:session): session closed for user root
Apr 24 23:36:38.890568 sshd[2297]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:38.896878 systemd[1]: sshd@5-172.31.17.112:22-20.229.252.112:41344.service: Deactivated successfully.
Apr 24 23:36:38.900637 systemd[1]: session-6.scope: Deactivated successfully.
Apr 24 23:36:38.904395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:36:38.910105 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit.
Apr 24 23:36:38.913410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:38.915630 systemd-logind[1993]: Removed session 6.
Apr 24 23:36:39.069038 systemd[1]: Started sshd@6-172.31.17.112:22-20.229.252.112:41350.service - OpenSSH per-connection server daemon (20.229.252.112:41350).
Apr 24 23:36:39.277407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:39.283046 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:36:39.351720 kubelet[2340]: E0424 23:36:39.351613 2340 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:36:39.360113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:36:39.360453 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:36:40.084009 sshd[2333]: Accepted publickey for core from 20.229.252.112 port 41350 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:36:40.086634 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:40.095100 systemd-logind[1993]: New session 7 of user core.
Apr 24 23:36:40.105255 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 24 23:36:40.615955 sudo[2348]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 24 23:36:40.616782 sudo[2348]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:36:41.115509 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 24 23:36:41.125510 (dockerd)[2364]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 24 23:36:41.549891 dockerd[2364]: time="2026-04-24T23:36:41.549730436Z" level=info msg="Starting up"
Apr 24 23:36:41.753278 dockerd[2364]: time="2026-04-24T23:36:41.753189837Z" level=info msg="Loading containers: start."
Apr 24 23:36:41.913038 kernel: Initializing XFRM netlink socket
Apr 24 23:36:41.947439 (udev-worker)[2386]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:36:42.053751 systemd-networkd[1934]: docker0: Link UP
Apr 24 23:36:42.084488 dockerd[2364]: time="2026-04-24T23:36:42.084414667Z" level=info msg="Loading containers: done."
Apr 24 23:36:42.107422 dockerd[2364]: time="2026-04-24T23:36:42.107343115Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 24 23:36:42.107603 dockerd[2364]: time="2026-04-24T23:36:42.107500999Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 24 23:36:42.107728 dockerd[2364]: time="2026-04-24T23:36:42.107691463Z" level=info msg="Daemon has completed initialization"
Apr 24 23:36:42.113241 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3549209673-merged.mount: Deactivated successfully.
Apr 24 23:36:42.172393 dockerd[2364]: time="2026-04-24T23:36:42.171917407Z" level=info msg="API listen on /run/docker.sock"
Apr 24 23:36:42.175206 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 24 23:36:43.238390 containerd[2018]: time="2026-04-24T23:36:43.238325720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 24 23:36:43.882780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662504899.mount: Deactivated successfully.
Apr 24 23:36:45.305021 containerd[2018]: time="2026-04-24T23:36:45.304243415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:45.306295 containerd[2018]: time="2026-04-24T23:36:45.306230783Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008787"
Apr 24 23:36:45.307278 containerd[2018]: time="2026-04-24T23:36:45.307238867Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:45.313348 containerd[2018]: time="2026-04-24T23:36:45.313295363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:45.316423 containerd[2018]: time="2026-04-24T23:36:45.316344707Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 2.077953287s"
Apr 24 23:36:45.316595 containerd[2018]: time="2026-04-24T23:36:45.316420391Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\""
Apr 24 23:36:45.317780 containerd[2018]: time="2026-04-24T23:36:45.317721647Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 24 23:36:47.026779 containerd[2018]: time="2026-04-24T23:36:47.026714063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:47.028562 containerd[2018]: time="2026-04-24T23:36:47.028492295Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297774"
Apr 24 23:36:47.031770 containerd[2018]: time="2026-04-24T23:36:47.031664699Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:47.041024 containerd[2018]: time="2026-04-24T23:36:47.039962099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:47.042126 containerd[2018]: time="2026-04-24T23:36:47.042057947Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.724264912s"
Apr 24 23:36:47.042333 containerd[2018]: time="2026-04-24T23:36:47.042298343Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\""
Apr 24 23:36:47.044742 containerd[2018]: time="2026-04-24T23:36:47.044650751Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 24 23:36:48.367164 containerd[2018]: time="2026-04-24T23:36:48.367096814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:48.369404 containerd[2018]: time="2026-04-24T23:36:48.369333998Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141358"
Apr 24 23:36:48.370898 containerd[2018]: time="2026-04-24T23:36:48.369903974Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:48.376229 containerd[2018]: time="2026-04-24T23:36:48.376172162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:48.378936 containerd[2018]: time="2026-04-24T23:36:48.378876182Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.334123407s"
Apr 24 23:36:48.379200 containerd[2018]: time="2026-04-24T23:36:48.379156250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\""
Apr 24 23:36:48.380032 containerd[2018]: time="2026-04-24T23:36:48.379926338Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 24 23:36:49.567998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 24 23:36:49.580500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:49.856679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599925879.mount: Deactivated successfully.
Apr 24 23:36:49.994471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:50.004582 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:36:50.108894 kubelet[2586]: E0424 23:36:50.108215 2586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:36:50.114908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:36:50.115444 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:36:50.644078 containerd[2018]: time="2026-04-24T23:36:50.644006261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:50.651309 containerd[2018]: time="2026-04-24T23:36:50.651231641Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040508"
Apr 24 23:36:50.653699 containerd[2018]: time="2026-04-24T23:36:50.652906517Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:50.661567 containerd[2018]: time="2026-04-24T23:36:50.661497665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:50.663350 containerd[2018]: time="2026-04-24T23:36:50.663195569Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 2.283185939s"
Apr 24 23:36:50.663350 containerd[2018]: time="2026-04-24T23:36:50.663268301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\""
Apr 24 23:36:50.664022 containerd[2018]: time="2026-04-24T23:36:50.663910973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 24 23:36:51.267569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980415492.mount: Deactivated successfully.
Apr 24 23:36:52.500371 containerd[2018]: time="2026-04-24T23:36:52.500289894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:52.502872 containerd[2018]: time="2026-04-24T23:36:52.502765135Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Apr 24 23:36:52.505024 containerd[2018]: time="2026-04-24T23:36:52.503795455Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:52.512869 containerd[2018]: time="2026-04-24T23:36:52.512777587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:52.516047 containerd[2018]: time="2026-04-24T23:36:52.515931847Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.851921766s"
Apr 24 23:36:52.516192 containerd[2018]: time="2026-04-24T23:36:52.516044695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 24 23:36:52.516999 containerd[2018]: time="2026-04-24T23:36:52.516764395Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 24 23:36:53.005469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725963416.mount: Deactivated successfully.
Apr 24 23:36:53.014721 containerd[2018]: time="2026-04-24T23:36:53.012901469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:53.014721 containerd[2018]: time="2026-04-24T23:36:53.014637353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Apr 24 23:36:53.015808 containerd[2018]: time="2026-04-24T23:36:53.015717293Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:53.021048 containerd[2018]: time="2026-04-24T23:36:53.020462561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:53.023496 containerd[2018]: time="2026-04-24T23:36:53.023064929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 506.225546ms"
Apr 24 23:36:53.023496 containerd[2018]: time="2026-04-24T23:36:53.023145557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 24 23:36:53.024643 containerd[2018]: time="2026-04-24T23:36:53.024585245Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 24 23:36:53.541950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3959704721.mount: Deactivated successfully.
Apr 24 23:36:54.947993 containerd[2018]: time="2026-04-24T23:36:54.947894087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:54.950426 containerd[2018]: time="2026-04-24T23:36:54.950339375Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886366"
Apr 24 23:36:54.953704 containerd[2018]: time="2026-04-24T23:36:54.952576307Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:54.962454 containerd[2018]: time="2026-04-24T23:36:54.962388035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:54.967496 containerd[2018]: time="2026-04-24T23:36:54.967441679Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.942613278s"
Apr 24 23:36:54.967666 containerd[2018]: time="2026-04-24T23:36:54.967637471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 24 23:36:55.074432 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 24 23:37:00.317770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 24 23:37:00.330609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:37:00.708541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:00.722785 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:37:00.801320 kubelet[2746]: E0424 23:37:00.801258 2746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:37:00.806523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:37:00.807630 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:37:02.803659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:02.814481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:37:02.878472 systemd[1]: Reloading requested from client PID 2760 ('systemctl') (unit session-7.scope)...
Apr 24 23:37:02.878505 systemd[1]: Reloading...
Apr 24 23:37:03.156074 zram_generator::config[2810]: No configuration found.
Apr 24 23:37:03.372594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:37:03.547789 systemd[1]: Reloading finished in 668 ms.
Apr 24 23:37:03.654510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:03.663531 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:37:03.668901 systemd[1]: kubelet.service: Deactivated successfully.
Apr 24 23:37:03.669495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:03.677570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:37:04.000244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:04.011282 (kubelet)[2865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:37:04.087386 kubelet[2865]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:37:04.087854 kubelet[2865]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 23:37:04.087854 kubelet[2865]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:37:04.087854 kubelet[2865]: I0424 23:37:04.087584 2865 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 23:37:04.847129 kubelet[2865]: I0424 23:37:04.847061 2865 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 24 23:37:04.847129 kubelet[2865]: I0424 23:37:04.847109 2865 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:37:04.847512 kubelet[2865]: I0424 23:37:04.847468 2865 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 23:37:04.893610 kubelet[2865]: I0424 23:37:04.893549 2865 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:37:04.894009 kubelet[2865]: E0424 23:37:04.893575 2865 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 23:37:04.911733 kubelet[2865]: E0424 23:37:04.911639 2865 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:37:04.911733 kubelet[2865]: I0424 23:37:04.911721 2865 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:37:04.918788 kubelet[2865]: I0424 23:37:04.918320 2865 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:37:04.922377 kubelet[2865]: I0424 23:37:04.922316 2865 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:37:04.922781 kubelet[2865]: I0424 23:37:04.922509 2865 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 24 23:37:04.923042 kubelet[2865]: I0424 23:37:04.923021 2865 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 23:37:04.923147 kubelet[2865]: I0424 23:37:04.923130 2865 container_manager_linux.go:303] "Creating device plugin manager"
Apr 24 23:37:04.923576 kubelet[2865]: I0424 23:37:04.923555 2865 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:37:04.930039 kubelet[2865]: I0424 23:37:04.930003 2865 kubelet.go:480] "Attempting to sync node with API server"
Apr 24 23:37:04.930366 kubelet[2865]: I0424 23:37:04.930203 2865 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:37:04.930366 kubelet[2865]: I0424 23:37:04.930267 2865 kubelet.go:386] "Adding apiserver pod source"
Apr 24 23:37:04.933138 kubelet[2865]: I0424 23:37:04.932578 2865 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:37:04.938600 kubelet[2865]: E0424 23:37:04.937304 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-112&limit=500&resourceVersion=0\": dial tcp 172.31.17.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 23:37:04.938600 kubelet[2865]: E0424 23:37:04.938278 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 23:37:04.938966 kubelet[2865]: I0424 23:37:04.938935 2865 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:37:04.940263 kubelet[2865]: I0424 23:37:04.940229 2865 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:37:04.940661 kubelet[2865]: W0424 23:37:04.940637 2865 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 24 23:37:04.945710 kubelet[2865]: I0424 23:37:04.945677 2865 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 23:37:04.945887 kubelet[2865]: I0424 23:37:04.945868 2865 server.go:1289] "Started kubelet"
Apr 24 23:37:04.949704 kubelet[2865]: I0424 23:37:04.949663 2865 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 23:37:04.959856 kubelet[2865]: I0424 23:37:04.958520 2865 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:37:04.960180 kubelet[2865]: I0424 23:37:04.960139 2865 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:37:04.966680 kubelet[2865]: I0424 23:37:04.966576 2865 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:37:04.967081 kubelet[2865]: I0424 23:37:04.966964 2865 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:37:04.967190 kubelet[2865]: I0424 23:37:04.966963 2865 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 23:37:04.967709 kubelet[2865]: E0424 23:37:04.967673 2865 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-112\" not found"
Apr 24 23:37:04.968648 kubelet[2865]: I0424 23:37:04.968619 2865 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 23:37:04.968895 kubelet[2865]: I0424 23:37:04.968876 2865 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 23:37:04.971597 kubelet[2865]: E0424 23:37:04.971556 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 23:37:04.971779 kubelet[2865]: I0424 23:37:04.971713 2865 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:37:04.973646 kubelet[2865]: E0424 23:37:04.973570 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-112?timeout=10s\": dial tcp 172.31.17.112:6443: connect: connection refused" interval="200ms"
Apr 24 23:37:04.974600 kubelet[2865]: E0424 23:37:04.971819 2865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.112:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.112:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-112.18a96f3a7b8a6de4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-112,UID:ip-172-31-17-112,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-112,},FirstTimestamp:2026-04-24 23:37:04.9458273 +0000 UTC m=+0.921748721,LastTimestamp:2026-04-24 23:37:04.9458273 +0000 UTC m=+0.921748721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-112,}"
Apr 24 23:37:04.974600 kubelet[2865]: I0424 23:37:04.974460 2865 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:37:04.974872 kubelet[2865]: I0424 23:37:04.974751 2865 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:37:04.977030 kubelet[2865]: E0424 23:37:04.976950 2865 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:37:04.979136 kubelet[2865]: I0424 23:37:04.979093 2865 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:37:05.015620 kubelet[2865]: I0424 23:37:05.015362 2865 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:37:05.019619 kubelet[2865]: I0424 23:37:05.019579 2865 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:37:05.019829 kubelet[2865]: I0424 23:37:05.019808 2865 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 23:37:05.019948 kubelet[2865]: I0424 23:37:05.019927 2865 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:37:05.020187 kubelet[2865]: I0424 23:37:05.020165 2865 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 24 23:37:05.020714 kubelet[2865]: E0424 23:37:05.020321 2865 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:37:05.024008 kubelet[2865]: I0424 23:37:05.023923 2865 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 23:37:05.024008 kubelet[2865]: I0424 23:37:05.024053 2865 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 23:37:05.024719 kubelet[2865]: I0424 23:37:05.024378 2865 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:37:05.029170 kubelet[2865]: I0424 23:37:05.029110 2865 policy_none.go:49] "None policy: Start"
Apr 24 23:37:05.029170 kubelet[2865]: I0424 23:37:05.029154 2865 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 24 23:37:05.029170 kubelet[2865]: I0424 23:37:05.029180 2865 state_mem.go:35] "Initializing new in-memory state store"
Apr 24 23:37:05.029526 kubelet[2865]: E0424 23:37:05.029492 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 23:37:05.039150 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 24 23:37:05.053909 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 24 23:37:05.062397 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 24 23:37:05.068655 kubelet[2865]: E0424 23:37:05.068594 2865 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-112\" not found"
Apr 24 23:37:05.074517 kubelet[2865]: E0424 23:37:05.073613 2865 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:37:05.074517 kubelet[2865]: I0424 23:37:05.073891 2865 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 23:37:05.074517 kubelet[2865]: I0424 23:37:05.073913 2865 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:37:05.076784 kubelet[2865]: I0424 23:37:05.076737 2865 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 23:37:05.078296 kubelet[2865]: E0424 23:37:05.078236 2865 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:37:05.078451 kubelet[2865]: E0424 23:37:05.078309 2865 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-112\" not found"
Apr 24 23:37:05.141010 systemd[1]: Created slice kubepods-burstable-pod0ec2c9d71fc7ae04b272133e67bdc2e2.slice - libcontainer container kubepods-burstable-pod0ec2c9d71fc7ae04b272133e67bdc2e2.slice.
Apr 24 23:37:05.157772 kubelet[2865]: E0424 23:37:05.157703 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112"
Apr 24 23:37:05.164761 systemd[1]: Created slice kubepods-burstable-podc53ffdff240cf9083ab37d67046f69a5.slice - libcontainer container kubepods-burstable-podc53ffdff240cf9083ab37d67046f69a5.slice.
Apr 24 23:37:05.170369 kubelet[2865]: I0424 23:37:05.170329 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ec2c9d71fc7ae04b272133e67bdc2e2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-112\" (UID: \"0ec2c9d71fc7ae04b272133e67bdc2e2\") " pod="kube-system/kube-apiserver-ip-172-31-17-112"
Apr 24 23:37:05.170593 kubelet[2865]: I0424 23:37:05.170566 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112"
Apr 24 23:37:05.171537 kubelet[2865]: I0424 23:37:05.171057 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112"
Apr 24 23:37:05.171537 kubelet[2865]: I0424 23:37:05.171138 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112"
Apr 24 23:37:05.171537 kubelet[2865]: I0424 23:37:05.171198 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112"
Apr 24 23:37:05.171537 kubelet[2865]: I0424 23:37:05.171241 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112"
Apr 24 23:37:05.171537 kubelet[2865]: I0424 23:37:05.171284 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f8a0be7a9a6de502017042f5a8b885b-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-112\" (UID: \"2f8a0be7a9a6de502017042f5a8b885b\") " pod="kube-system/kube-scheduler-ip-172-31-17-112"
Apr 24 23:37:05.171858 kubelet[2865]: I0424 23:37:05.171320 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ec2c9d71fc7ae04b272133e67bdc2e2-ca-certs\") pod \"kube-apiserver-ip-172-31-17-112\" (UID: \"0ec2c9d71fc7ae04b272133e67bdc2e2\") " pod="kube-system/kube-apiserver-ip-172-31-17-112"
Apr 24 23:37:05.171858 kubelet[2865]: I0424 23:37:05.171355 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ec2c9d71fc7ae04b272133e67bdc2e2-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-112\" (UID: \"0ec2c9d71fc7ae04b272133e67bdc2e2\") " pod="kube-system/kube-apiserver-ip-172-31-17-112"
Apr 24 23:37:05.174054 kubelet[2865]: E0424 23:37:05.173574 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112"
Apr 24 23:37:05.175737 kubelet[2865]: E0424 23:37:05.175652 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-112?timeout=10s\": dial tcp 172.31.17.112:6443: connect: connection refused" interval="400ms"
Apr 24 23:37:05.177882 kubelet[2865]: I0424 23:37:05.177346 2865 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-112"
Apr 24 23:37:05.179017 kubelet[2865]: E0424 23:37:05.178221 2865 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.112:6443/api/v1/nodes\": dial tcp 172.31.17.112:6443: connect: connection refused" node="ip-172-31-17-112"
Apr 24 23:37:05.181765 systemd[1]: Created slice kubepods-burstable-pod2f8a0be7a9a6de502017042f5a8b885b.slice - libcontainer container kubepods-burstable-pod2f8a0be7a9a6de502017042f5a8b885b.slice.
Apr 24 23:37:05.185855 kubelet[2865]: E0424 23:37:05.185787 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112"
Apr 24 23:37:05.380764 kubelet[2865]: I0424 23:37:05.380723 2865 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-112"
Apr 24 23:37:05.381275 kubelet[2865]: E0424 23:37:05.381227 2865 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.112:6443/api/v1/nodes\": dial tcp 172.31.17.112:6443: connect: connection refused" node="ip-172-31-17-112"
Apr 24 23:37:05.460573 containerd[2018]: time="2026-04-24T23:37:05.459960739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-112,Uid:0ec2c9d71fc7ae04b272133e67bdc2e2,Namespace:kube-system,Attempt:0,}"
Apr 24 23:37:05.477279 containerd[2018]: time="2026-04-24T23:37:05.477206107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-112,Uid:c53ffdff240cf9083ab37d67046f69a5,Namespace:kube-system,Attempt:0,}"
Apr 24 23:37:05.488191 containerd[2018]: time="2026-04-24T23:37:05.487728727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-112,Uid:2f8a0be7a9a6de502017042f5a8b885b,Namespace:kube-system,Attempt:0,}"
Apr 24 23:37:05.576474 kubelet[2865]: E0424 23:37:05.576267 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-112?timeout=10s\": dial tcp 172.31.17.112:6443: connect: connection refused" interval="800ms"
Apr 24 23:37:05.784210 kubelet[2865]: I0424 23:37:05.783482 2865 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-112"
Apr 24 23:37:05.784210 kubelet[2865]: E0424 23:37:05.783958 2865 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.112:6443/api/v1/nodes\": dial tcp 172.31.17.112:6443: connect: connection refused" node="ip-172-31-17-112"
Apr 24 23:37:05.996629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425460181.mount: Deactivated successfully.
Apr 24 23:37:06.016048 containerd[2018]: time="2026-04-24T23:37:06.015952938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:06.018304 containerd[2018]: time="2026-04-24T23:37:06.018227382Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:06.020693 containerd[2018]: time="2026-04-24T23:37:06.020632254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Apr 24 23:37:06.022236 containerd[2018]: time="2026-04-24T23:37:06.022168710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 24 23:37:06.024280 containerd[2018]: time="2026-04-24T23:37:06.024214506Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:06.027220 containerd[2018]: time="2026-04-24T23:37:06.026957766Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:06.028870 containerd[2018]: time="2026-04-24T23:37:06.028760586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 24 23:37:06.034565 containerd[2018]: time="2026-04-24T23:37:06.034362834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:06.044033 containerd[2018]: time="2026-04-24T23:37:06.043357626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.035479ms"
Apr 24 23:37:06.047514 containerd[2018]: time="2026-04-24T23:37:06.047426922Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 587.327031ms"
Apr 24 23:37:06.053899 containerd[2018]: time="2026-04-24T23:37:06.053829546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.990131ms"
Apr 24 23:37:06.266531 containerd[2018]: time="2026-04-24T23:37:06.265791643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:37:06.266531 containerd[2018]: time="2026-04-24T23:37:06.265887631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:37:06.266531 containerd[2018]: time="2026-04-24T23:37:06.265914595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:06.266832 containerd[2018]: time="2026-04-24T23:37:06.266087023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:06.280153 containerd[2018]: time="2026-04-24T23:37:06.279999727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:37:06.280344 containerd[2018]: time="2026-04-24T23:37:06.280124263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:37:06.280344 containerd[2018]: time="2026-04-24T23:37:06.280162831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:06.280535 containerd[2018]: time="2026-04-24T23:37:06.280317859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:06.282931 containerd[2018]: time="2026-04-24T23:37:06.282540163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:37:06.282931 containerd[2018]: time="2026-04-24T23:37:06.282625507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:37:06.282931 containerd[2018]: time="2026-04-24T23:37:06.282650887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:06.282931 containerd[2018]: time="2026-04-24T23:37:06.282795187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:06.321460 systemd[1]: Started cri-containerd-47d67b62e9a44ca3e0b811a9b5c6210763abb30df173fc6ec214932fcff7ffa2.scope - libcontainer container 47d67b62e9a44ca3e0b811a9b5c6210763abb30df173fc6ec214932fcff7ffa2.
Apr 24 23:37:06.347506 kubelet[2865]: E0424 23:37:06.347335 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 23:37:06.357156 systemd[1]: Started cri-containerd-58f0145028fe60de8adab4836f16a1d510f7679bc6c8781c0391da4d0229fccf.scope - libcontainer container 58f0145028fe60de8adab4836f16a1d510f7679bc6c8781c0391da4d0229fccf.
Apr 24 23:37:06.372259 systemd[1]: Started cri-containerd-77f140b117ab17d491d5faa12cd21f06ba499d05c69886434476862034c916b2.scope - libcontainer container 77f140b117ab17d491d5faa12cd21f06ba499d05c69886434476862034c916b2.
Apr 24 23:37:06.378762 kubelet[2865]: E0424 23:37:06.378494 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-112?timeout=10s\": dial tcp 172.31.17.112:6443: connect: connection refused" interval="1.6s" Apr 24 23:37:06.455084 containerd[2018]: time="2026-04-24T23:37:06.454738760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-112,Uid:0ec2c9d71fc7ae04b272133e67bdc2e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"58f0145028fe60de8adab4836f16a1d510f7679bc6c8781c0391da4d0229fccf\"" Apr 24 23:37:06.475429 kubelet[2865]: E0424 23:37:06.474330 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:37:06.475600 containerd[2018]: time="2026-04-24T23:37:06.474738548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-112,Uid:c53ffdff240cf9083ab37d67046f69a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"47d67b62e9a44ca3e0b811a9b5c6210763abb30df173fc6ec214932fcff7ffa2\"" Apr 24 23:37:06.480052 containerd[2018]: time="2026-04-24T23:37:06.478625696Z" level=info msg="CreateContainer within sandbox \"58f0145028fe60de8adab4836f16a1d510f7679bc6c8781c0391da4d0229fccf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:37:06.490076 containerd[2018]: time="2026-04-24T23:37:06.490024544Z" level=info msg="CreateContainer within sandbox \"47d67b62e9a44ca3e0b811a9b5c6210763abb30df173fc6ec214932fcff7ffa2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:37:06.502689 kubelet[2865]: E0424 23:37:06.502634 2865 
reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-112&limit=500&resourceVersion=0\": dial tcp 172.31.17.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:37:06.510470 containerd[2018]: time="2026-04-24T23:37:06.510323444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-112,Uid:2f8a0be7a9a6de502017042f5a8b885b,Namespace:kube-system,Attempt:0,} returns sandbox id \"77f140b117ab17d491d5faa12cd21f06ba499d05c69886434476862034c916b2\"" Apr 24 23:37:06.518571 containerd[2018]: time="2026-04-24T23:37:06.518396360Z" level=info msg="CreateContainer within sandbox \"77f140b117ab17d491d5faa12cd21f06ba499d05c69886434476862034c916b2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:37:06.520660 containerd[2018]: time="2026-04-24T23:37:06.520309256Z" level=info msg="CreateContainer within sandbox \"58f0145028fe60de8adab4836f16a1d510f7679bc6c8781c0391da4d0229fccf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fee0456342b7017f5b509f447566c5965aff29b1467f538991b45496af9942c4\"" Apr 24 23:37:06.521692 containerd[2018]: time="2026-04-24T23:37:06.521421164Z" level=info msg="StartContainer for \"fee0456342b7017f5b509f447566c5965aff29b1467f538991b45496af9942c4\"" Apr 24 23:37:06.534663 kubelet[2865]: E0424 23:37:06.534603 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:37:06.547012 containerd[2018]: time="2026-04-24T23:37:06.546735920Z" level=info msg="CreateContainer within sandbox 
\"47d67b62e9a44ca3e0b811a9b5c6210763abb30df173fc6ec214932fcff7ffa2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"09accce6b992f139a83c2643c6cf0d294c7db0f283a994abadad8d7275e1f098\"" Apr 24 23:37:06.547744 containerd[2018]: time="2026-04-24T23:37:06.547683308Z" level=info msg="StartContainer for \"09accce6b992f139a83c2643c6cf0d294c7db0f283a994abadad8d7275e1f098\"" Apr 24 23:37:06.558300 containerd[2018]: time="2026-04-24T23:37:06.558104708Z" level=info msg="CreateContainer within sandbox \"77f140b117ab17d491d5faa12cd21f06ba499d05c69886434476862034c916b2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72dcb187c0e097d7d92d767bf005140cf4d6d9be4631ebdb5f82de8574429ed9\"" Apr 24 23:37:06.559014 containerd[2018]: time="2026-04-24T23:37:06.558922556Z" level=info msg="StartContainer for \"72dcb187c0e097d7d92d767bf005140cf4d6d9be4631ebdb5f82de8574429ed9\"" Apr 24 23:37:06.588053 kubelet[2865]: I0424 23:37:06.587878 2865 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-112" Apr 24 23:37:06.590673 kubelet[2865]: E0424 23:37:06.590616 2865 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.112:6443/api/v1/nodes\": dial tcp 172.31.17.112:6443: connect: connection refused" node="ip-172-31-17-112" Apr 24 23:37:06.594328 systemd[1]: Started cri-containerd-fee0456342b7017f5b509f447566c5965aff29b1467f538991b45496af9942c4.scope - libcontainer container fee0456342b7017f5b509f447566c5965aff29b1467f538991b45496af9942c4. Apr 24 23:37:06.632285 systemd[1]: Started cri-containerd-09accce6b992f139a83c2643c6cf0d294c7db0f283a994abadad8d7275e1f098.scope - libcontainer container 09accce6b992f139a83c2643c6cf0d294c7db0f283a994abadad8d7275e1f098. 
Apr 24 23:37:06.655724 systemd[1]: Started cri-containerd-72dcb187c0e097d7d92d767bf005140cf4d6d9be4631ebdb5f82de8574429ed9.scope - libcontainer container 72dcb187c0e097d7d92d767bf005140cf4d6d9be4631ebdb5f82de8574429ed9. Apr 24 23:37:06.746428 containerd[2018]: time="2026-04-24T23:37:06.746095041Z" level=info msg="StartContainer for \"fee0456342b7017f5b509f447566c5965aff29b1467f538991b45496af9942c4\" returns successfully" Apr 24 23:37:06.755349 containerd[2018]: time="2026-04-24T23:37:06.755260845Z" level=info msg="StartContainer for \"09accce6b992f139a83c2643c6cf0d294c7db0f283a994abadad8d7275e1f098\" returns successfully" Apr 24 23:37:06.801395 containerd[2018]: time="2026-04-24T23:37:06.801232726Z" level=info msg="StartContainer for \"72dcb187c0e097d7d92d767bf005140cf4d6d9be4631ebdb5f82de8574429ed9\" returns successfully" Apr 24 23:37:07.052321 kubelet[2865]: E0424 23:37:07.052179 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112" Apr 24 23:37:07.054870 kubelet[2865]: E0424 23:37:07.054811 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112" Apr 24 23:37:07.060270 kubelet[2865]: E0424 23:37:07.060223 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112" Apr 24 23:37:08.066301 kubelet[2865]: E0424 23:37:08.066236 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112" Apr 24 23:37:08.071879 kubelet[2865]: E0424 23:37:08.071825 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112" Apr 24 
23:37:08.072658 kubelet[2865]: E0424 23:37:08.072606 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112" Apr 24 23:37:08.194757 kubelet[2865]: I0424 23:37:08.194668 2865 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-112" Apr 24 23:37:09.334016 update_engine[1995]: I20260424 23:37:09.332034 1995 update_attempter.cc:509] Updating boot flags... Apr 24 23:37:09.483045 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3157) Apr 24 23:37:09.933141 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3157) Apr 24 23:37:10.221265 kubelet[2865]: E0424 23:37:10.221135 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-112\" not found" node="ip-172-31-17-112" Apr 24 23:37:10.615645 kubelet[2865]: E0424 23:37:10.615559 2865 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-112\" not found" node="ip-172-31-17-112" Apr 24 23:37:10.640631 kubelet[2865]: I0424 23:37:10.640548 2865 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-112" Apr 24 23:37:10.669497 kubelet[2865]: I0424 23:37:10.669429 2865 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-112" Apr 24 23:37:10.731697 kubelet[2865]: E0424 23:37:10.731498 2865 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-17-112.18a96f3a7b8a6de4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-112,UID:ip-172-31-17-112,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-112,},FirstTimestamp:2026-04-24 23:37:04.9458273 +0000 UTC m=+0.921748721,LastTimestamp:2026-04-24 23:37:04.9458273 +0000 UTC m=+0.921748721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-112,}" Apr 24 23:37:10.736709 kubelet[2865]: E0424 23:37:10.736638 2865 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-112\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-112" Apr 24 23:37:10.736709 kubelet[2865]: I0424 23:37:10.736686 2865 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-112" Apr 24 23:37:10.744931 kubelet[2865]: E0424 23:37:10.744873 2865 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-112\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-112" Apr 24 23:37:10.744931 kubelet[2865]: I0424 23:37:10.744921 2865 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-112" Apr 24 23:37:10.748888 kubelet[2865]: E0424 23:37:10.748821 2865 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-112\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-112" Apr 24 23:37:10.942673 kubelet[2865]: I0424 23:37:10.942330 2865 apiserver.go:52] "Watching apiserver" Apr 24 23:37:10.969597 kubelet[2865]: I0424 23:37:10.969539 2865 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:37:13.014689 systemd[1]: Reloading requested from client PID 3326 ('systemctl') (unit session-7.scope)... 
Apr 24 23:37:13.014723 systemd[1]: Reloading... Apr 24 23:37:13.214033 zram_generator::config[3369]: No configuration found. Apr 24 23:37:13.538398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:37:13.760930 systemd[1]: Reloading finished in 745 ms. Apr 24 23:37:13.853354 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:13.869776 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:37:13.870484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:13.870581 systemd[1]: kubelet.service: Consumed 1.742s CPU time, 129.9M memory peak, 0B memory swap peak. Apr 24 23:37:13.881697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:14.246721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:14.267291 (kubelet)[3426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:37:14.383555 kubelet[3426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:37:14.383555 kubelet[3426]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:37:14.383555 kubelet[3426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:37:14.384229 kubelet[3426]: I0424 23:37:14.383647 3426 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:37:14.397897 kubelet[3426]: I0424 23:37:14.397814 3426 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:37:14.397897 kubelet[3426]: I0424 23:37:14.397871 3426 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:37:14.398784 kubelet[3426]: I0424 23:37:14.398373 3426 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:37:14.402288 kubelet[3426]: I0424 23:37:14.400958 3426 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:37:14.406691 kubelet[3426]: I0424 23:37:14.405945 3426 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:37:14.424606 kubelet[3426]: E0424 23:37:14.424540 3426 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:37:14.424606 kubelet[3426]: I0424 23:37:14.424605 3426 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:37:14.432519 kubelet[3426]: I0424 23:37:14.432091 3426 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 24 23:37:14.432732 kubelet[3426]: I0424 23:37:14.432671 3426 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:37:14.432998 kubelet[3426]: I0424 23:37:14.432732 3426 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:37:14.433232 kubelet[3426]: I0424 23:37:14.433040 3426 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:37:14.433232 kubelet[3426]: I0424 23:37:14.433065 3426 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:37:14.433232 kubelet[3426]: I0424 23:37:14.433153 3426 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:37:14.434354 kubelet[3426]: I0424 23:37:14.433471 3426 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:37:14.434354 kubelet[3426]: I0424 23:37:14.434183 3426 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:37:14.434354 kubelet[3426]: I0424 23:37:14.434347 3426 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:37:14.434606 kubelet[3426]: I0424 23:37:14.434380 3426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:37:14.439760 kubelet[3426]: I0424 23:37:14.439608 3426 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:37:14.444175 kubelet[3426]: I0424 23:37:14.443671 3426 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:37:14.459706 kubelet[3426]: I0424 23:37:14.455957 3426 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:37:14.459706 kubelet[3426]: I0424 23:37:14.456056 3426 server.go:1289] "Started kubelet" Apr 24 23:37:14.463010 kubelet[3426]: I0424 23:37:14.462272 3426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:37:14.471665 kubelet[3426]: I0424 23:37:14.470943 3426 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:37:14.479011 kubelet[3426]: I0424 23:37:14.477244 3426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:37:14.479011 kubelet[3426]: I0424 23:37:14.477735 3426 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 
23:37:14.485727 kubelet[3426]: I0424 23:37:14.482776 3426 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:37:14.502364 kubelet[3426]: I0424 23:37:14.487285 3426 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:37:14.507662 kubelet[3426]: I0424 23:37:14.490162 3426 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:37:14.508658 kubelet[3426]: I0424 23:37:14.490184 3426 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:37:14.508812 kubelet[3426]: E0424 23:37:14.490471 3426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-112\" not found" Apr 24 23:37:14.509157 kubelet[3426]: I0424 23:37:14.509132 3426 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:37:14.532017 kubelet[3426]: I0424 23:37:14.531907 3426 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:37:14.535293 kubelet[3426]: I0424 23:37:14.535241 3426 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:37:14.582964 kubelet[3426]: I0424 23:37:14.582680 3426 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:37:14.586431 kubelet[3426]: I0424 23:37:14.586156 3426 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:37:14.586431 kubelet[3426]: I0424 23:37:14.586219 3426 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:37:14.586431 kubelet[3426]: I0424 23:37:14.586255 3426 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 23:37:14.586431 kubelet[3426]: I0424 23:37:14.586271 3426 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:37:14.587565 kubelet[3426]: E0424 23:37:14.586840 3426 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:37:14.593060 kubelet[3426]: I0424 23:37:14.591603 3426 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:37:14.634296 kubelet[3426]: E0424 23:37:14.634133 3426 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:37:14.687308 kubelet[3426]: E0424 23:37:14.687225 3426 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 24 23:37:14.744074 kubelet[3426]: I0424 23:37:14.743841 3426 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:37:14.744074 kubelet[3426]: I0424 23:37:14.743874 3426 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:37:14.744074 kubelet[3426]: I0424 23:37:14.743913 3426 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:37:14.745414 kubelet[3426]: I0424 23:37:14.744848 3426 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:37:14.745414 kubelet[3426]: I0424 23:37:14.744902 3426 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:37:14.745414 kubelet[3426]: I0424 23:37:14.744940 3426 policy_none.go:49] "None policy: Start" Apr 24 23:37:14.745414 kubelet[3426]: I0424 23:37:14.744960 3426 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:37:14.745414 kubelet[3426]: I0424 23:37:14.745064 3426 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:37:14.745414 kubelet[3426]: I0424 23:37:14.745279 3426 state_mem.go:75] "Updated machine memory state" Apr 24 23:37:14.759147 kubelet[3426]: 
E0424 23:37:14.758606 3426 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:37:14.762352 kubelet[3426]: I0424 23:37:14.762151 3426 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:37:14.762352 kubelet[3426]: I0424 23:37:14.762203 3426 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:37:14.764601 kubelet[3426]: I0424 23:37:14.763343 3426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:37:14.770545 kubelet[3426]: E0424 23:37:14.770040 3426 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:37:14.883304 kubelet[3426]: I0424 23:37:14.883249 3426 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-112" Apr 24 23:37:14.890500 kubelet[3426]: I0424 23:37:14.890388 3426 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-112" Apr 24 23:37:14.891535 kubelet[3426]: I0424 23:37:14.891491 3426 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-112" Apr 24 23:37:14.893684 kubelet[3426]: I0424 23:37:14.893110 3426 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-112" Apr 24 23:37:14.914833 kubelet[3426]: I0424 23:37:14.914374 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ec2c9d71fc7ae04b272133e67bdc2e2-ca-certs\") pod \"kube-apiserver-ip-172-31-17-112\" (UID: \"0ec2c9d71fc7ae04b272133e67bdc2e2\") " pod="kube-system/kube-apiserver-ip-172-31-17-112" Apr 24 23:37:14.914833 kubelet[3426]: I0424 23:37:14.914440 3426 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ec2c9d71fc7ae04b272133e67bdc2e2-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-112\" (UID: \"0ec2c9d71fc7ae04b272133e67bdc2e2\") " pod="kube-system/kube-apiserver-ip-172-31-17-112" Apr 24 23:37:14.914833 kubelet[3426]: I0424 23:37:14.914488 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f8a0be7a9a6de502017042f5a8b885b-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-112\" (UID: \"2f8a0be7a9a6de502017042f5a8b885b\") " pod="kube-system/kube-scheduler-ip-172-31-17-112" Apr 24 23:37:14.914833 kubelet[3426]: I0424 23:37:14.914531 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ec2c9d71fc7ae04b272133e67bdc2e2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-112\" (UID: \"0ec2c9d71fc7ae04b272133e67bdc2e2\") " pod="kube-system/kube-apiserver-ip-172-31-17-112" Apr 24 23:37:14.914833 kubelet[3426]: I0424 23:37:14.914572 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112" Apr 24 23:37:14.915250 kubelet[3426]: I0424 23:37:14.914609 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112" Apr 24 23:37:14.915250 kubelet[3426]: 
I0424 23:37:14.914646 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112" Apr 24 23:37:14.915250 kubelet[3426]: I0424 23:37:14.914683 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112" Apr 24 23:37:14.915250 kubelet[3426]: I0424 23:37:14.914735 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c53ffdff240cf9083ab37d67046f69a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-112\" (UID: \"c53ffdff240cf9083ab37d67046f69a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-112" Apr 24 23:37:14.918353 kubelet[3426]: I0424 23:37:14.917253 3426 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-112" Apr 24 23:37:14.918353 kubelet[3426]: I0424 23:37:14.917422 3426 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-112" Apr 24 23:37:15.456625 kubelet[3426]: I0424 23:37:15.456561 3426 apiserver.go:52] "Watching apiserver" Apr 24 23:37:15.510495 kubelet[3426]: I0424 23:37:15.510402 3426 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:37:15.749529 kubelet[3426]: I0424 23:37:15.749189 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-112" 
podStartSLOduration=1.749144586 podStartE2EDuration="1.749144586s" podCreationTimestamp="2026-04-24 23:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:15.749051526 +0000 UTC m=+1.473634652" watchObservedRunningTime="2026-04-24 23:37:15.749144586 +0000 UTC m=+1.473727688" Apr 24 23:37:15.803012 kubelet[3426]: I0424 23:37:15.802890 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-112" podStartSLOduration=1.802841574 podStartE2EDuration="1.802841574s" podCreationTimestamp="2026-04-24 23:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:15.775106886 +0000 UTC m=+1.499690024" watchObservedRunningTime="2026-04-24 23:37:15.802841574 +0000 UTC m=+1.527424700" Apr 24 23:37:18.303474 kubelet[3426]: I0424 23:37:18.303253 3426 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:37:18.304196 containerd[2018]: time="2026-04-24T23:37:18.303812299Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 24 23:37:18.304786 kubelet[3426]: I0424 23:37:18.304236 3426 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:37:19.054765 kubelet[3426]: I0424 23:37:19.053185 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-112" podStartSLOduration=5.053161482 podStartE2EDuration="5.053161482s" podCreationTimestamp="2026-04-24 23:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:15.805548774 +0000 UTC m=+1.530131888" watchObservedRunningTime="2026-04-24 23:37:19.053161482 +0000 UTC m=+4.777744596" Apr 24 23:37:19.079284 systemd[1]: Created slice kubepods-besteffort-pod0c44b61d_8b44_4a07_ba27_bb9b3470f07d.slice - libcontainer container kubepods-besteffort-pod0c44b61d_8b44_4a07_ba27_bb9b3470f07d.slice. Apr 24 23:37:19.144028 kubelet[3426]: I0424 23:37:19.143470 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdcxg\" (UniqueName: \"kubernetes.io/projected/0c44b61d-8b44-4a07-ba27-bb9b3470f07d-kube-api-access-qdcxg\") pod \"kube-proxy-rfhxj\" (UID: \"0c44b61d-8b44-4a07-ba27-bb9b3470f07d\") " pod="kube-system/kube-proxy-rfhxj" Apr 24 23:37:19.144028 kubelet[3426]: I0424 23:37:19.143540 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c44b61d-8b44-4a07-ba27-bb9b3470f07d-kube-proxy\") pod \"kube-proxy-rfhxj\" (UID: \"0c44b61d-8b44-4a07-ba27-bb9b3470f07d\") " pod="kube-system/kube-proxy-rfhxj" Apr 24 23:37:19.144028 kubelet[3426]: I0424 23:37:19.143584 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c44b61d-8b44-4a07-ba27-bb9b3470f07d-xtables-lock\") pod 
\"kube-proxy-rfhxj\" (UID: \"0c44b61d-8b44-4a07-ba27-bb9b3470f07d\") " pod="kube-system/kube-proxy-rfhxj" Apr 24 23:37:19.144028 kubelet[3426]: I0424 23:37:19.143623 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c44b61d-8b44-4a07-ba27-bb9b3470f07d-lib-modules\") pod \"kube-proxy-rfhxj\" (UID: \"0c44b61d-8b44-4a07-ba27-bb9b3470f07d\") " pod="kube-system/kube-proxy-rfhxj" Apr 24 23:37:19.279104 systemd[1]: Created slice kubepods-besteffort-podef1bf254_2056_4b33_94d3_655f39098b76.slice - libcontainer container kubepods-besteffort-podef1bf254_2056_4b33_94d3_655f39098b76.slice. Apr 24 23:37:19.345526 kubelet[3426]: I0424 23:37:19.345445 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6kf6\" (UniqueName: \"kubernetes.io/projected/ef1bf254-2056-4b33-94d3-655f39098b76-kube-api-access-v6kf6\") pod \"tigera-operator-6bf85f8dd-w2bs4\" (UID: \"ef1bf254-2056-4b33-94d3-655f39098b76\") " pod="tigera-operator/tigera-operator-6bf85f8dd-w2bs4" Apr 24 23:37:19.346166 kubelet[3426]: I0424 23:37:19.345541 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef1bf254-2056-4b33-94d3-655f39098b76-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-w2bs4\" (UID: \"ef1bf254-2056-4b33-94d3-655f39098b76\") " pod="tigera-operator/tigera-operator-6bf85f8dd-w2bs4" Apr 24 23:37:19.406941 containerd[2018]: time="2026-04-24T23:37:19.406334912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfhxj,Uid:0c44b61d-8b44-4a07-ba27-bb9b3470f07d,Namespace:kube-system,Attempt:0,}" Apr 24 23:37:19.445844 containerd[2018]: time="2026-04-24T23:37:19.445633580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:19.446474 containerd[2018]: time="2026-04-24T23:37:19.446218568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:19.448040 containerd[2018]: time="2026-04-24T23:37:19.446430704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:19.449579 containerd[2018]: time="2026-04-24T23:37:19.449474108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:19.516342 systemd[1]: Started cri-containerd-9c852444e94412d8611f98ae18ff1d3b5426a0d940085ce75d811d8ed225049c.scope - libcontainer container 9c852444e94412d8611f98ae18ff1d3b5426a0d940085ce75d811d8ed225049c. Apr 24 23:37:19.565593 containerd[2018]: time="2026-04-24T23:37:19.565535037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfhxj,Uid:0c44b61d-8b44-4a07-ba27-bb9b3470f07d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c852444e94412d8611f98ae18ff1d3b5426a0d940085ce75d811d8ed225049c\"" Apr 24 23:37:19.580478 containerd[2018]: time="2026-04-24T23:37:19.580402425Z" level=info msg="CreateContainer within sandbox \"9c852444e94412d8611f98ae18ff1d3b5426a0d940085ce75d811d8ed225049c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:37:19.595100 containerd[2018]: time="2026-04-24T23:37:19.595035921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-w2bs4,Uid:ef1bf254-2056-4b33-94d3-655f39098b76,Namespace:tigera-operator,Attempt:0,}" Apr 24 23:37:19.606478 containerd[2018]: time="2026-04-24T23:37:19.606131625Z" level=info msg="CreateContainer within sandbox \"9c852444e94412d8611f98ae18ff1d3b5426a0d940085ce75d811d8ed225049c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"ec22408f668842f092824298228bd80c6ba2cedfabd2b32de0a322b98075bead\"" Apr 24 23:37:19.610724 containerd[2018]: time="2026-04-24T23:37:19.610574877Z" level=info msg="StartContainer for \"ec22408f668842f092824298228bd80c6ba2cedfabd2b32de0a322b98075bead\"" Apr 24 23:37:19.649821 containerd[2018]: time="2026-04-24T23:37:19.649304361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:19.649821 containerd[2018]: time="2026-04-24T23:37:19.649430313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:19.649821 containerd[2018]: time="2026-04-24T23:37:19.649458033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:19.649821 containerd[2018]: time="2026-04-24T23:37:19.649635129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:19.678205 systemd[1]: Started cri-containerd-ec22408f668842f092824298228bd80c6ba2cedfabd2b32de0a322b98075bead.scope - libcontainer container ec22408f668842f092824298228bd80c6ba2cedfabd2b32de0a322b98075bead. Apr 24 23:37:19.712054 systemd[1]: Started cri-containerd-9aebab10ce49c6508e9d79420da5c206e9390eba8670b3cb5bc4ee1bf04b1575.scope - libcontainer container 9aebab10ce49c6508e9d79420da5c206e9390eba8670b3cb5bc4ee1bf04b1575. 
Apr 24 23:37:19.795457 containerd[2018]: time="2026-04-24T23:37:19.795393034Z" level=info msg="StartContainer for \"ec22408f668842f092824298228bd80c6ba2cedfabd2b32de0a322b98075bead\" returns successfully" Apr 24 23:37:19.854779 containerd[2018]: time="2026-04-24T23:37:19.854661262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-w2bs4,Uid:ef1bf254-2056-4b33-94d3-655f39098b76,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9aebab10ce49c6508e9d79420da5c206e9390eba8670b3cb5bc4ee1bf04b1575\"" Apr 24 23:37:19.861182 containerd[2018]: time="2026-04-24T23:37:19.860556142Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 24 23:37:20.744031 kubelet[3426]: I0424 23:37:20.742648 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rfhxj" podStartSLOduration=1.742626047 podStartE2EDuration="1.742626047s" podCreationTimestamp="2026-04-24 23:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:20.742416035 +0000 UTC m=+6.466999209" watchObservedRunningTime="2026-04-24 23:37:20.742626047 +0000 UTC m=+6.467209161" Apr 24 23:37:20.948923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4193212597.mount: Deactivated successfully. 
Apr 24 23:37:22.815010 containerd[2018]: time="2026-04-24T23:37:22.813553561Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:22.815561 containerd[2018]: time="2026-04-24T23:37:22.815349061Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 24 23:37:22.815824 containerd[2018]: time="2026-04-24T23:37:22.815784109Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:22.820302 containerd[2018]: time="2026-04-24T23:37:22.820234789Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:22.822276 containerd[2018]: time="2026-04-24T23:37:22.822219661Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.959031715s" Apr 24 23:37:22.822436 containerd[2018]: time="2026-04-24T23:37:22.822403105Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 24 23:37:22.831460 containerd[2018]: time="2026-04-24T23:37:22.831406729Z" level=info msg="CreateContainer within sandbox \"9aebab10ce49c6508e9d79420da5c206e9390eba8670b3cb5bc4ee1bf04b1575\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 23:37:22.848118 containerd[2018]: time="2026-04-24T23:37:22.848048617Z" level=info msg="CreateContainer within sandbox 
\"9aebab10ce49c6508e9d79420da5c206e9390eba8670b3cb5bc4ee1bf04b1575\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e2cdf3673e8b223a96ffc7f78bf8ab500b0b5386125147fd63efcc9e7ac7fdd8\"" Apr 24 23:37:22.854257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031855253.mount: Deactivated successfully. Apr 24 23:37:22.857743 containerd[2018]: time="2026-04-24T23:37:22.854296093Z" level=info msg="StartContainer for \"e2cdf3673e8b223a96ffc7f78bf8ab500b0b5386125147fd63efcc9e7ac7fdd8\"" Apr 24 23:37:22.919308 systemd[1]: Started cri-containerd-e2cdf3673e8b223a96ffc7f78bf8ab500b0b5386125147fd63efcc9e7ac7fdd8.scope - libcontainer container e2cdf3673e8b223a96ffc7f78bf8ab500b0b5386125147fd63efcc9e7ac7fdd8. Apr 24 23:37:22.973189 containerd[2018]: time="2026-04-24T23:37:22.973034438Z" level=info msg="StartContainer for \"e2cdf3673e8b223a96ffc7f78bf8ab500b0b5386125147fd63efcc9e7ac7fdd8\" returns successfully" Apr 24 23:37:23.757034 kubelet[3426]: I0424 23:37:23.755534 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-w2bs4" podStartSLOduration=1.791229011 podStartE2EDuration="4.755513774s" podCreationTimestamp="2026-04-24 23:37:19 +0000 UTC" firstStartedPulling="2026-04-24 23:37:19.859636858 +0000 UTC m=+5.584219984" lastFinishedPulling="2026-04-24 23:37:22.823921645 +0000 UTC m=+8.548504747" observedRunningTime="2026-04-24 23:37:23.75527531 +0000 UTC m=+9.479858424" watchObservedRunningTime="2026-04-24 23:37:23.755513774 +0000 UTC m=+9.480096876" Apr 24 23:37:31.694500 sudo[2348]: pam_unix(sudo:session): session closed for user root Apr 24 23:37:31.859457 sshd[2333]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:31.868967 systemd[1]: sshd@6-172.31.17.112:22-20.229.252.112:41350.service: Deactivated successfully. Apr 24 23:37:31.879570 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 24 23:37:31.881077 systemd[1]: session-7.scope: Consumed 11.763s CPU time, 151.3M memory peak, 0B memory swap peak. Apr 24 23:37:31.882075 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:37:31.886279 systemd-logind[1993]: Removed session 7. Apr 24 23:37:43.078794 systemd[1]: Created slice kubepods-besteffort-pod1c3acbb7_f50a_468d_8a23_d0c2c376aedb.slice - libcontainer container kubepods-besteffort-pod1c3acbb7_f50a_468d_8a23_d0c2c376aedb.slice. Apr 24 23:37:43.113014 kubelet[3426]: I0424 23:37:43.111357 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c3acbb7-f50a-468d-8a23-d0c2c376aedb-tigera-ca-bundle\") pod \"calico-typha-dfd6d48b-x8bd7\" (UID: \"1c3acbb7-f50a-468d-8a23-d0c2c376aedb\") " pod="calico-system/calico-typha-dfd6d48b-x8bd7" Apr 24 23:37:43.113014 kubelet[3426]: I0424 23:37:43.111433 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqp4n\" (UniqueName: \"kubernetes.io/projected/1c3acbb7-f50a-468d-8a23-d0c2c376aedb-kube-api-access-fqp4n\") pod \"calico-typha-dfd6d48b-x8bd7\" (UID: \"1c3acbb7-f50a-468d-8a23-d0c2c376aedb\") " pod="calico-system/calico-typha-dfd6d48b-x8bd7" Apr 24 23:37:43.113014 kubelet[3426]: I0424 23:37:43.111521 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c3acbb7-f50a-468d-8a23-d0c2c376aedb-typha-certs\") pod \"calico-typha-dfd6d48b-x8bd7\" (UID: \"1c3acbb7-f50a-468d-8a23-d0c2c376aedb\") " pod="calico-system/calico-typha-dfd6d48b-x8bd7" Apr 24 23:37:43.280598 systemd[1]: Created slice kubepods-besteffort-podf99d6bcd_238c_4b28_87eb_eb94a883ce20.slice - libcontainer container kubepods-besteffort-podf99d6bcd_238c_4b28_87eb_eb94a883ce20.slice. 
Apr 24 23:37:43.314544 kubelet[3426]: I0424 23:37:43.313761 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-nodeproc\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.314544 kubelet[3426]: I0424 23:37:43.313839 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-bpffs\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.314544 kubelet[3426]: I0424 23:37:43.313876 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f99d6bcd-238c-4b28-87eb-eb94a883ce20-tigera-ca-bundle\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.314544 kubelet[3426]: I0424 23:37:43.313918 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-xtables-lock\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.314544 kubelet[3426]: I0424 23:37:43.313952 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-cni-bin-dir\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315035 kubelet[3426]: I0424 23:37:43.314010 3426 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-cni-net-dir\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315035 kubelet[3426]: I0424 23:37:43.314058 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-var-lib-calico\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315035 kubelet[3426]: I0424 23:37:43.314093 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-var-run-calico\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315035 kubelet[3426]: I0424 23:37:43.314132 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-policysync\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315035 kubelet[3426]: I0424 23:37:43.314165 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-sys-fs\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315310 kubelet[3426]: I0424 23:37:43.314206 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8sd5\" 
(UniqueName: \"kubernetes.io/projected/f99d6bcd-238c-4b28-87eb-eb94a883ce20-kube-api-access-w8sd5\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315310 kubelet[3426]: I0424 23:37:43.314250 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-flexvol-driver-host\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315310 kubelet[3426]: I0424 23:37:43.314283 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f99d6bcd-238c-4b28-87eb-eb94a883ce20-node-certs\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315310 kubelet[3426]: I0424 23:37:43.314319 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-cni-log-dir\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.315310 kubelet[3426]: I0424 23:37:43.314356 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f99d6bcd-238c-4b28-87eb-eb94a883ce20-lib-modules\") pod \"calico-node-pjgsk\" (UID: \"f99d6bcd-238c-4b28-87eb-eb94a883ce20\") " pod="calico-system/calico-node-pjgsk" Apr 24 23:37:43.390683 containerd[2018]: time="2026-04-24T23:37:43.390070567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dfd6d48b-x8bd7,Uid:1c3acbb7-f50a-468d-8a23-d0c2c376aedb,Namespace:calico-system,Attempt:0,}" Apr 
24 23:37:43.404143 kubelet[3426]: E0424 23:37:43.403765 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:37:43.416773 kubelet[3426]: E0424 23:37:43.416733 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.421047 kubelet[3426]: W0424 23:37:43.417191 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.421047 kubelet[3426]: E0424 23:37:43.419039 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.423202 kubelet[3426]: E0424 23:37:43.423164 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.423392 kubelet[3426]: W0424 23:37:43.423362 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.423532 kubelet[3426]: E0424 23:37:43.423490 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.482221 kubelet[3426]: E0424 23:37:43.481936 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.482221 kubelet[3426]: W0424 23:37:43.481964 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.482221 kubelet[3426]: E0424 23:37:43.482017 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.482839 kubelet[3426]: E0424 23:37:43.482676 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.482839 kubelet[3426]: W0424 23:37:43.482703 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.482839 kubelet[3426]: E0424 23:37:43.482733 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.483587 kubelet[3426]: E0424 23:37:43.483485 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.483587 kubelet[3426]: W0424 23:37:43.483516 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.483587 kubelet[3426]: E0424 23:37:43.483547 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.485022 kubelet[3426]: E0424 23:37:43.484836 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.485022 kubelet[3426]: W0424 23:37:43.484874 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.485022 kubelet[3426]: E0424 23:37:43.484908 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.487366 kubelet[3426]: E0424 23:37:43.487179 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.487366 kubelet[3426]: W0424 23:37:43.487214 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.487366 kubelet[3426]: E0424 23:37:43.487247 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.488210 kubelet[3426]: E0424 23:37:43.487914 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.488210 kubelet[3426]: W0424 23:37:43.487943 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.488210 kubelet[3426]: E0424 23:37:43.488059 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.489199 kubelet[3426]: E0424 23:37:43.488861 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.489199 kubelet[3426]: W0424 23:37:43.488894 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.489199 kubelet[3426]: E0424 23:37:43.488937 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.490126 kubelet[3426]: E0424 23:37:43.489746 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.490126 kubelet[3426]: W0424 23:37:43.489783 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.490126 kubelet[3426]: E0424 23:37:43.489815 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.491594 kubelet[3426]: E0424 23:37:43.491350 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.491594 kubelet[3426]: W0424 23:37:43.491383 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.491594 kubelet[3426]: E0424 23:37:43.491416 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.493244 kubelet[3426]: E0424 23:37:43.492256 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.493244 kubelet[3426]: W0424 23:37:43.492288 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.493244 kubelet[3426]: E0424 23:37:43.492317 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.494121 kubelet[3426]: E0424 23:37:43.493821 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.494121 kubelet[3426]: W0424 23:37:43.493853 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.494121 kubelet[3426]: E0424 23:37:43.493883 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.495454 kubelet[3426]: E0424 23:37:43.495173 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.495454 kubelet[3426]: W0424 23:37:43.495212 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.495454 kubelet[3426]: E0424 23:37:43.495244 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.497893 kubelet[3426]: E0424 23:37:43.497490 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.497893 kubelet[3426]: W0424 23:37:43.497522 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.497893 kubelet[3426]: E0424 23:37:43.497553 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.499029 kubelet[3426]: E0424 23:37:43.498307 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.499029 kubelet[3426]: W0424 23:37:43.498336 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.499029 kubelet[3426]: E0424 23:37:43.498363 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.502525 kubelet[3426]: E0424 23:37:43.501938 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.502525 kubelet[3426]: W0424 23:37:43.502265 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.502525 kubelet[3426]: E0424 23:37:43.502307 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.504402 kubelet[3426]: E0424 23:37:43.504144 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.504402 kubelet[3426]: W0424 23:37:43.504180 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.504402 kubelet[3426]: E0424 23:37:43.504259 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.506748 kubelet[3426]: E0424 23:37:43.506479 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.506748 kubelet[3426]: W0424 23:37:43.506514 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.506748 kubelet[3426]: E0424 23:37:43.506568 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.507739 kubelet[3426]: E0424 23:37:43.507495 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.507739 kubelet[3426]: W0424 23:37:43.507524 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.507739 kubelet[3426]: E0424 23:37:43.507574 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.510462 kubelet[3426]: E0424 23:37:43.510431 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.510864 kubelet[3426]: W0424 23:37:43.510519 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.510864 kubelet[3426]: E0424 23:37:43.510552 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.511358 kubelet[3426]: E0424 23:37:43.511316 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.511551 kubelet[3426]: W0424 23:37:43.511446 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.511551 kubelet[3426]: E0424 23:37:43.511471 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.512355 kubelet[3426]: E0424 23:37:43.512196 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.512355 kubelet[3426]: W0424 23:37:43.512220 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.512355 kubelet[3426]: E0424 23:37:43.512276 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.513462 kubelet[3426]: E0424 23:37:43.513138 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.513462 kubelet[3426]: W0424 23:37:43.513164 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.513462 kubelet[3426]: E0424 23:37:43.513215 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.516198 kubelet[3426]: E0424 23:37:43.514186 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.516198 kubelet[3426]: W0424 23:37:43.514217 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.516198 kubelet[3426]: E0424 23:37:43.516068 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.517274 kubelet[3426]: E0424 23:37:43.517033 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.517274 kubelet[3426]: W0424 23:37:43.517065 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.517274 kubelet[3426]: E0424 23:37:43.517094 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.518027 kubelet[3426]: E0424 23:37:43.517791 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.518027 kubelet[3426]: W0424 23:37:43.517818 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.518027 kubelet[3426]: E0424 23:37:43.517845 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.518882 kubelet[3426]: E0424 23:37:43.518626 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.518882 kubelet[3426]: W0424 23:37:43.518656 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.518882 kubelet[3426]: E0424 23:37:43.518686 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.519701 kubelet[3426]: E0424 23:37:43.519496 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.519701 kubelet[3426]: W0424 23:37:43.519525 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.519701 kubelet[3426]: E0424 23:37:43.519561 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.521462 kubelet[3426]: E0424 23:37:43.521267 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.521462 kubelet[3426]: W0424 23:37:43.521303 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.521462 kubelet[3426]: E0424 23:37:43.521336 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 24 23:37:43.522394 kubelet[3426]: E0424 23:37:43.522129 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:37:43.522394 kubelet[3426]: W0424 23:37:43.522158 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:37:43.522394 kubelet[3426]: E0424 23:37:43.522230 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:37:43.524578 kubelet[3426]: E0424 23:37:43.524309 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:37:43.524578 kubelet[3426]: W0424 23:37:43.524343 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:37:43.524578 kubelet[3426]: E0424 23:37:43.524377 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:37:43.528018 containerd[2018]: time="2026-04-24T23:37:43.524864072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:37:43.528018 containerd[2018]: time="2026-04-24T23:37:43.525020852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:37:43.528018 containerd[2018]: time="2026-04-24T23:37:43.525065540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:43.528018 containerd[2018]: time="2026-04-24T23:37:43.525273848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:43.531054 kubelet[3426]: E0424 23:37:43.527952 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:37:43.531260 kubelet[3426]: W0424 23:37:43.531222 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:37:43.531379 kubelet[3426]: E0424 23:37:43.531354 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:37:43.531964 kubelet[3426]: E0424 23:37:43.531928 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:37:43.532177 kubelet[3426]: W0424 23:37:43.532146 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:37:43.533046 kubelet[3426]: E0424 23:37:43.532326 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 24 23:37:43.591840 kubelet[3426]: E0424 23:37:43.591668 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.591840 kubelet[3426]: W0424 23:37:43.591706 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.591840 kubelet[3426]: E0424 23:37:43.591738 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.592880 kubelet[3426]: E0424 23:37:43.592688 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.592880 kubelet[3426]: W0424 23:37:43.592722 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.592880 kubelet[3426]: E0424 23:37:43.592755 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.594013 kubelet[3426]: E0424 23:37:43.593668 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.594013 kubelet[3426]: W0424 23:37:43.593703 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.594013 kubelet[3426]: E0424 23:37:43.593736 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.595854 kubelet[3426]: E0424 23:37:43.595696 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.595854 kubelet[3426]: W0424 23:37:43.595740 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.595854 kubelet[3426]: E0424 23:37:43.595774 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.595854 kubelet[3426]: I0424 23:37:43.595819 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6eaa5414-466a-4dc1-ab02-5e5914bb112e-kubelet-dir\") pod \"csi-node-driver-d77dn\" (UID: \"6eaa5414-466a-4dc1-ab02-5e5914bb112e\") " pod="calico-system/csi-node-driver-d77dn" Apr 24 23:37:43.598016 containerd[2018]: time="2026-04-24T23:37:43.596567456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pjgsk,Uid:f99d6bcd-238c-4b28-87eb-eb94a883ce20,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:43.598213 kubelet[3426]: E0424 23:37:43.597138 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.598213 kubelet[3426]: W0424 23:37:43.597172 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.598213 kubelet[3426]: E0424 23:37:43.597205 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.598213 kubelet[3426]: I0424 23:37:43.597251 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6eaa5414-466a-4dc1-ab02-5e5914bb112e-varrun\") pod \"csi-node-driver-d77dn\" (UID: \"6eaa5414-466a-4dc1-ab02-5e5914bb112e\") " pod="calico-system/csi-node-driver-d77dn" Apr 24 23:37:43.599588 kubelet[3426]: E0424 23:37:43.599221 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.599588 kubelet[3426]: W0424 23:37:43.599611 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.599588 kubelet[3426]: E0424 23:37:43.599649 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.599588 kubelet[3426]: I0424 23:37:43.599710 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6eaa5414-466a-4dc1-ab02-5e5914bb112e-registration-dir\") pod \"csi-node-driver-d77dn\" (UID: \"6eaa5414-466a-4dc1-ab02-5e5914bb112e\") " pod="calico-system/csi-node-driver-d77dn" Apr 24 23:37:43.601693 kubelet[3426]: E0424 23:37:43.601306 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.601693 kubelet[3426]: W0424 23:37:43.601341 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.601693 kubelet[3426]: E0424 23:37:43.601392 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.605523 kubelet[3426]: E0424 23:37:43.605224 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.605523 kubelet[3426]: W0424 23:37:43.605275 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.605523 kubelet[3426]: E0424 23:37:43.605309 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.607990 kubelet[3426]: E0424 23:37:43.607591 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.607990 kubelet[3426]: W0424 23:37:43.607626 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.607990 kubelet[3426]: E0424 23:37:43.607659 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.611622 kubelet[3426]: E0424 23:37:43.611109 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.611622 kubelet[3426]: W0424 23:37:43.611145 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.611622 kubelet[3426]: E0424 23:37:43.611178 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.611622 kubelet[3426]: I0424 23:37:43.611255 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99z2g\" (UniqueName: \"kubernetes.io/projected/6eaa5414-466a-4dc1-ab02-5e5914bb112e-kube-api-access-99z2g\") pod \"csi-node-driver-d77dn\" (UID: \"6eaa5414-466a-4dc1-ab02-5e5914bb112e\") " pod="calico-system/csi-node-driver-d77dn" Apr 24 23:37:43.614511 kubelet[3426]: E0424 23:37:43.614250 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.615274 kubelet[3426]: W0424 23:37:43.614851 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.615274 kubelet[3426]: E0424 23:37:43.614904 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.617871 kubelet[3426]: E0424 23:37:43.617520 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.617871 kubelet[3426]: W0424 23:37:43.617568 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.617871 kubelet[3426]: E0424 23:37:43.617602 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.620378 kubelet[3426]: E0424 23:37:43.620160 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.620378 kubelet[3426]: W0424 23:37:43.620193 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.620378 kubelet[3426]: E0424 23:37:43.620233 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.623444 kubelet[3426]: E0424 23:37:43.623158 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.623444 kubelet[3426]: W0424 23:37:43.623190 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.623444 kubelet[3426]: E0424 23:37:43.623223 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.626628 kubelet[3426]: E0424 23:37:43.625989 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.626628 kubelet[3426]: W0424 23:37:43.626026 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.626628 kubelet[3426]: E0424 23:37:43.626058 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.629460 kubelet[3426]: E0424 23:37:43.629177 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.629460 kubelet[3426]: W0424 23:37:43.629213 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.629460 kubelet[3426]: E0424 23:37:43.629247 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.629460 kubelet[3426]: I0424 23:37:43.629304 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6eaa5414-466a-4dc1-ab02-5e5914bb112e-socket-dir\") pod \"csi-node-driver-d77dn\" (UID: \"6eaa5414-466a-4dc1-ab02-5e5914bb112e\") " pod="calico-system/csi-node-driver-d77dn" Apr 24 23:37:43.631028 kubelet[3426]: E0424 23:37:43.630160 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.631028 kubelet[3426]: W0424 23:37:43.630199 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.631028 kubelet[3426]: E0424 23:37:43.630229 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.634784 kubelet[3426]: E0424 23:37:43.633927 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.634784 kubelet[3426]: W0424 23:37:43.633964 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.634784 kubelet[3426]: E0424 23:37:43.634051 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.645320 systemd[1]: Started cri-containerd-7dcdfb12bc4c3a9fb3735bd2d9bfa72c0ffecf361c415b1f655630b208651e3f.scope - libcontainer container 7dcdfb12bc4c3a9fb3735bd2d9bfa72c0ffecf361c415b1f655630b208651e3f. Apr 24 23:37:43.708083 containerd[2018]: time="2026-04-24T23:37:43.706603617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:43.708083 containerd[2018]: time="2026-04-24T23:37:43.706765653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:43.708083 containerd[2018]: time="2026-04-24T23:37:43.706819461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:43.710437 containerd[2018]: time="2026-04-24T23:37:43.708562221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:43.731457 kubelet[3426]: E0424 23:37:43.731391 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.731457 kubelet[3426]: W0424 23:37:43.731431 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.731457 kubelet[3426]: E0424 23:37:43.731465 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.733502 kubelet[3426]: E0424 23:37:43.733420 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.733502 kubelet[3426]: W0424 23:37:43.733466 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.733502 kubelet[3426]: E0424 23:37:43.733501 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.734945 kubelet[3426]: E0424 23:37:43.734431 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.734945 kubelet[3426]: W0424 23:37:43.734476 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.734945 kubelet[3426]: E0424 23:37:43.734512 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.736659 kubelet[3426]: E0424 23:37:43.736212 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.736659 kubelet[3426]: W0424 23:37:43.736270 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.736659 kubelet[3426]: E0424 23:37:43.736331 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.741006 kubelet[3426]: E0424 23:37:43.739510 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.741006 kubelet[3426]: W0424 23:37:43.739546 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.741006 kubelet[3426]: E0424 23:37:43.739580 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.742354 kubelet[3426]: E0424 23:37:43.742168 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.742354 kubelet[3426]: W0424 23:37:43.742204 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.745862 kubelet[3426]: E0424 23:37:43.745436 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.748532 kubelet[3426]: E0424 23:37:43.748208 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.748532 kubelet[3426]: W0424 23:37:43.748262 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.748532 kubelet[3426]: E0424 23:37:43.748297 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.750893 kubelet[3426]: E0424 23:37:43.750090 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.750893 kubelet[3426]: W0424 23:37:43.750125 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.750893 kubelet[3426]: E0424 23:37:43.750156 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.751685 kubelet[3426]: E0424 23:37:43.751354 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.751904 kubelet[3426]: W0424 23:37:43.751752 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.751904 kubelet[3426]: E0424 23:37:43.751789 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.758153 kubelet[3426]: E0424 23:37:43.757538 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.758153 kubelet[3426]: W0424 23:37:43.757572 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.758153 kubelet[3426]: E0424 23:37:43.757605 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.762494 kubelet[3426]: E0424 23:37:43.762098 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.762868 kubelet[3426]: W0424 23:37:43.762673 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.762868 kubelet[3426]: E0424 23:37:43.762717 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.763681 kubelet[3426]: E0424 23:37:43.763649 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.763832 kubelet[3426]: W0424 23:37:43.763806 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.765086 kubelet[3426]: E0424 23:37:43.764002 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.768657 kubelet[3426]: E0424 23:37:43.768162 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.768657 kubelet[3426]: W0424 23:37:43.768199 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.768657 kubelet[3426]: E0424 23:37:43.768232 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.769998 kubelet[3426]: E0424 23:37:43.769281 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.769998 kubelet[3426]: W0424 23:37:43.769311 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.769998 kubelet[3426]: E0424 23:37:43.769343 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.771860 kubelet[3426]: E0424 23:37:43.770984 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.771860 kubelet[3426]: W0424 23:37:43.771017 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.771860 kubelet[3426]: E0424 23:37:43.771062 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.773483 kubelet[3426]: E0424 23:37:43.773204 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.773483 kubelet[3426]: W0424 23:37:43.773238 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.773483 kubelet[3426]: E0424 23:37:43.773276 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.773924 kubelet[3426]: E0424 23:37:43.773896 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.774242 kubelet[3426]: W0424 23:37:43.774030 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.774242 kubelet[3426]: E0424 23:37:43.774066 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.774659 kubelet[3426]: E0424 23:37:43.774629 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.775172 kubelet[3426]: W0424 23:37:43.774821 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.775582 kubelet[3426]: E0424 23:37:43.775347 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.777023 kubelet[3426]: E0424 23:37:43.775877 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.777455 kubelet[3426]: W0424 23:37:43.777193 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.777455 kubelet[3426]: E0424 23:37:43.777245 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.777830 kubelet[3426]: E0424 23:37:43.777808 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.777929 kubelet[3426]: W0424 23:37:43.777906 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.778189 kubelet[3426]: E0424 23:37:43.778063 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.778450 kubelet[3426]: E0424 23:37:43.778430 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.778555 kubelet[3426]: W0424 23:37:43.778533 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.778802 kubelet[3426]: E0424 23:37:43.778633 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.779093 kubelet[3426]: E0424 23:37:43.779070 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.779202 kubelet[3426]: W0424 23:37:43.779178 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.781014 kubelet[3426]: E0424 23:37:43.779322 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.781681 kubelet[3426]: E0424 23:37:43.781639 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.781809 kubelet[3426]: W0424 23:37:43.781784 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.781999 kubelet[3426]: E0424 23:37:43.781942 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:37:43.782868 kubelet[3426]: E0424 23:37:43.782752 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.782868 kubelet[3426]: W0424 23:37:43.782803 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.782868 kubelet[3426]: E0424 23:37:43.782828 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.784331 kubelet[3426]: E0424 23:37:43.784282 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.784331 kubelet[3426]: W0424 23:37:43.784320 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.784331 kubelet[3426]: E0424 23:37:43.784352 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.803484 systemd[1]: Started cri-containerd-c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932.scope - libcontainer container c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932. 
Apr 24 23:37:43.858163 kubelet[3426]: E0424 23:37:43.858109 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:37:43.858163 kubelet[3426]: W0424 23:37:43.858150 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:37:43.858163 kubelet[3426]: E0424 23:37:43.858184 3426 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:37:43.998350 containerd[2018]: time="2026-04-24T23:37:43.998159626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pjgsk,Uid:f99d6bcd-238c-4b28-87eb-eb94a883ce20,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932\"" Apr 24 23:37:44.007195 containerd[2018]: time="2026-04-24T23:37:44.006874194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 24 23:37:44.059229 containerd[2018]: time="2026-04-24T23:37:44.059131087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dfd6d48b-x8bd7,Uid:1c3acbb7-f50a-468d-8a23-d0c2c376aedb,Namespace:calico-system,Attempt:0,} returns sandbox id \"7dcdfb12bc4c3a9fb3735bd2d9bfa72c0ffecf361c415b1f655630b208651e3f\"" Apr 24 23:37:45.249461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526074951.mount: Deactivated successfully. 
Apr 24 23:37:45.390278 containerd[2018]: time="2026-04-24T23:37:45.390197445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:45.393408 containerd[2018]: time="2026-04-24T23:37:45.393126273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=5855345" Apr 24 23:37:45.395622 containerd[2018]: time="2026-04-24T23:37:45.395525553Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:45.402646 containerd[2018]: time="2026-04-24T23:37:45.402123177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:45.404400 containerd[2018]: time="2026-04-24T23:37:45.404304261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.397342875s" Apr 24 23:37:45.404555 containerd[2018]: time="2026-04-24T23:37:45.404522949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 24 23:37:45.406523 containerd[2018]: time="2026-04-24T23:37:45.406461657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:37:45.414803 containerd[2018]: time="2026-04-24T23:37:45.414740865Z" level=info msg="CreateContainer within 
sandbox \"c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:37:45.448954 containerd[2018]: time="2026-04-24T23:37:45.448866117Z" level=info msg="CreateContainer within sandbox \"c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a\"" Apr 24 23:37:45.451016 containerd[2018]: time="2026-04-24T23:37:45.449790214Z" level=info msg="StartContainer for \"c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a\"" Apr 24 23:37:45.510604 systemd[1]: run-containerd-runc-k8s.io-c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a-runc.D2cdCM.mount: Deactivated successfully. Apr 24 23:37:45.521338 systemd[1]: Started cri-containerd-c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a.scope - libcontainer container c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a. Apr 24 23:37:45.575031 containerd[2018]: time="2026-04-24T23:37:45.574921678Z" level=info msg="StartContainer for \"c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a\" returns successfully" Apr 24 23:37:45.587204 kubelet[3426]: E0424 23:37:45.587118 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:37:45.610210 systemd[1]: cri-containerd-c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a.scope: Deactivated successfully. 
Apr 24 23:37:45.814691 containerd[2018]: time="2026-04-24T23:37:45.814247507Z" level=info msg="shim disconnected" id=c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a namespace=k8s.io Apr 24 23:37:45.814691 containerd[2018]: time="2026-04-24T23:37:45.814322651Z" level=warning msg="cleaning up after shim disconnected" id=c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a namespace=k8s.io Apr 24 23:37:45.814691 containerd[2018]: time="2026-04-24T23:37:45.814344383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:37:46.248674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c069b823459a9d3b3f5f957be6e54ccc0f90473e8e2d721a59dbefa1a1bfef4a-rootfs.mount: Deactivated successfully. Apr 24 23:37:47.440942 containerd[2018]: time="2026-04-24T23:37:47.440874371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:47.442460 containerd[2018]: time="2026-04-24T23:37:47.442403531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=32467511" Apr 24 23:37:47.447031 containerd[2018]: time="2026-04-24T23:37:47.446630171Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:47.456872 containerd[2018]: time="2026-04-24T23:37:47.456800387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:47.458688 containerd[2018]: time="2026-04-24T23:37:47.458616827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.05208965s" Apr 24 23:37:47.459041 containerd[2018]: time="2026-04-24T23:37:47.458863607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 24 23:37:47.461533 containerd[2018]: time="2026-04-24T23:37:47.461285795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:37:47.480014 containerd[2018]: time="2026-04-24T23:37:47.479435412Z" level=info msg="CreateContainer within sandbox \"7dcdfb12bc4c3a9fb3735bd2d9bfa72c0ffecf361c415b1f655630b208651e3f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 23:37:47.513195 containerd[2018]: time="2026-04-24T23:37:47.512960172Z" level=info msg="CreateContainer within sandbox \"7dcdfb12bc4c3a9fb3735bd2d9bfa72c0ffecf361c415b1f655630b208651e3f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a630f0cf13d25c5f74b8b1d56f6a70a86df556566b93781128eee05a4b7e1dd1\"" Apr 24 23:37:47.515704 containerd[2018]: time="2026-04-24T23:37:47.515506980Z" level=info msg="StartContainer for \"a630f0cf13d25c5f74b8b1d56f6a70a86df556566b93781128eee05a4b7e1dd1\"" Apr 24 23:37:47.576316 systemd[1]: Started cri-containerd-a630f0cf13d25c5f74b8b1d56f6a70a86df556566b93781128eee05a4b7e1dd1.scope - libcontainer container a630f0cf13d25c5f74b8b1d56f6a70a86df556566b93781128eee05a4b7e1dd1. 
Apr 24 23:37:47.586826 kubelet[3426]: E0424 23:37:47.586747 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:37:47.648891 containerd[2018]: time="2026-04-24T23:37:47.648727344Z" level=info msg="StartContainer for \"a630f0cf13d25c5f74b8b1d56f6a70a86df556566b93781128eee05a4b7e1dd1\" returns successfully" Apr 24 23:37:47.935422 kubelet[3426]: I0424 23:37:47.934693 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-dfd6d48b-x8bd7" podStartSLOduration=1.53649451 podStartE2EDuration="4.934663622s" podCreationTimestamp="2026-04-24 23:37:43 +0000 UTC" firstStartedPulling="2026-04-24 23:37:44.062387299 +0000 UTC m=+29.786970413" lastFinishedPulling="2026-04-24 23:37:47.460556339 +0000 UTC m=+33.185139525" observedRunningTime="2026-04-24 23:37:47.925095518 +0000 UTC m=+33.649678656" watchObservedRunningTime="2026-04-24 23:37:47.934663622 +0000 UTC m=+33.659246844" Apr 24 23:37:48.856017 kubelet[3426]: I0424 23:37:48.854492 3426 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:49.587436 kubelet[3426]: E0424 23:37:49.587082 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:37:50.410094 kubelet[3426]: I0424 23:37:50.409433 3426 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:51.588097 kubelet[3426]: E0424 23:37:51.587873 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:37:53.587644 kubelet[3426]: E0424 23:37:53.587122 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:37:53.897938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566472966.mount: Deactivated successfully. Apr 24 23:37:53.958025 containerd[2018]: time="2026-04-24T23:37:53.957585428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:53.959089 containerd[2018]: time="2026-04-24T23:37:53.958920248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 24 23:37:53.960728 containerd[2018]: time="2026-04-24T23:37:53.960040844Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:53.964215 containerd[2018]: time="2026-04-24T23:37:53.964145660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:53.966446 containerd[2018]: time="2026-04-24T23:37:53.965872916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo 
digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.504517329s" Apr 24 23:37:53.966446 containerd[2018]: time="2026-04-24T23:37:53.965929544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 24 23:37:53.973519 containerd[2018]: time="2026-04-24T23:37:53.972993140Z" level=info msg="CreateContainer within sandbox \"c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:37:53.997440 containerd[2018]: time="2026-04-24T23:37:53.997217948Z" level=info msg="CreateContainer within sandbox \"c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"94281e26e2d422a264255615151f70ffa1d22b7a90cf5edfcfd96a8f76f6c1a7\"" Apr 24 23:37:53.998877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058869838.mount: Deactivated successfully. Apr 24 23:37:54.002011 containerd[2018]: time="2026-04-24T23:37:54.000682972Z" level=info msg="StartContainer for \"94281e26e2d422a264255615151f70ffa1d22b7a90cf5edfcfd96a8f76f6c1a7\"" Apr 24 23:37:54.063320 systemd[1]: Started cri-containerd-94281e26e2d422a264255615151f70ffa1d22b7a90cf5edfcfd96a8f76f6c1a7.scope - libcontainer container 94281e26e2d422a264255615151f70ffa1d22b7a90cf5edfcfd96a8f76f6c1a7. Apr 24 23:37:54.122390 containerd[2018]: time="2026-04-24T23:37:54.122316581Z" level=info msg="StartContainer for \"94281e26e2d422a264255615151f70ffa1d22b7a90cf5edfcfd96a8f76f6c1a7\" returns successfully" Apr 24 23:37:54.312419 systemd[1]: cri-containerd-94281e26e2d422a264255615151f70ffa1d22b7a90cf5edfcfd96a8f76f6c1a7.scope: Deactivated successfully. 
Apr 24 23:37:54.904933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94281e26e2d422a264255615151f70ffa1d22b7a90cf5edfcfd96a8f76f6c1a7-rootfs.mount: Deactivated successfully. Apr 24 23:37:55.099501 containerd[2018]: time="2026-04-24T23:37:55.099117785Z" level=info msg="shim disconnected" id=94281e26e2d422a264255615151f70ffa1d22b7a90cf5edfcfd96a8f76f6c1a7 namespace=k8s.io Apr 24 23:37:55.099501 containerd[2018]: time="2026-04-24T23:37:55.099194705Z" level=warning msg="cleaning up after shim disconnected" id=94281e26e2d422a264255615151f70ffa1d22b7a90cf5edfcfd96a8f76f6c1a7 namespace=k8s.io Apr 24 23:37:55.099501 containerd[2018]: time="2026-04-24T23:37:55.099217997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:37:55.587718 kubelet[3426]: E0424 23:37:55.587645 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:37:55.886215 containerd[2018]: time="2026-04-24T23:37:55.885003393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:37:57.587149 kubelet[3426]: E0424 23:37:57.587092 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:37:58.968514 containerd[2018]: time="2026-04-24T23:37:58.968432293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:58.971472 containerd[2018]: time="2026-04-24T23:37:58.971402137Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 24 23:37:58.974240 containerd[2018]: time="2026-04-24T23:37:58.974170945Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:58.980338 containerd[2018]: time="2026-04-24T23:37:58.980252665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:58.983205 containerd[2018]: time="2026-04-24T23:37:58.983131369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.098064988s" Apr 24 23:37:58.983205 containerd[2018]: time="2026-04-24T23:37:58.983194933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 24 23:37:58.991388 containerd[2018]: time="2026-04-24T23:37:58.991325701Z" level=info msg="CreateContainer within sandbox \"c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:37:59.021295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198052057.mount: Deactivated successfully. 
Apr 24 23:37:59.028918 containerd[2018]: time="2026-04-24T23:37:59.028853937Z" level=info msg="CreateContainer within sandbox \"c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f\"" Apr 24 23:37:59.030659 containerd[2018]: time="2026-04-24T23:37:59.030571605Z" level=info msg="StartContainer for \"093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f\"" Apr 24 23:37:59.083029 systemd[1]: run-containerd-runc-k8s.io-093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f-runc.wkjSKT.mount: Deactivated successfully. Apr 24 23:37:59.096372 systemd[1]: Started cri-containerd-093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f.scope - libcontainer container 093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f. Apr 24 23:37:59.159612 containerd[2018]: time="2026-04-24T23:37:59.159547642Z" level=info msg="StartContainer for \"093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f\" returns successfully" Apr 24 23:37:59.588894 kubelet[3426]: E0424 23:37:59.588398 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:38:00.989234 systemd[1]: cri-containerd-093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f.scope: Deactivated successfully. Apr 24 23:38:00.991229 systemd[1]: cri-containerd-093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f.scope: Consumed 1.075s CPU time. 
Apr 24 23:38:01.007521 kubelet[3426]: I0424 23:38:01.007475 3426 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:38:01.058928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f-rootfs.mount: Deactivated successfully. Apr 24 23:38:01.081754 containerd[2018]: time="2026-04-24T23:38:01.081629891Z" level=info msg="shim disconnected" id=093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f namespace=k8s.io Apr 24 23:38:01.081754 containerd[2018]: time="2026-04-24T23:38:01.081733415Z" level=warning msg="cleaning up after shim disconnected" id=093e43327025cc9244e0d2cd6ece43d5050bd0a7b941a62c9396649a9c40712f namespace=k8s.io Apr 24 23:38:01.082517 containerd[2018]: time="2026-04-24T23:38:01.081760427Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:38:01.103020 systemd[1]: Created slice kubepods-burstable-pod4359df61_6ba9_4f39_beb9_94953c5d320d.slice - libcontainer container kubepods-burstable-pod4359df61_6ba9_4f39_beb9_94953c5d320d.slice. Apr 24 23:38:01.139963 systemd[1]: Created slice kubepods-burstable-podaf04062c_5fa7_4ac4_a06d_12268ffd9166.slice - libcontainer container kubepods-burstable-podaf04062c_5fa7_4ac4_a06d_12268ffd9166.slice. Apr 24 23:38:01.204837 systemd[1]: Created slice kubepods-besteffort-pod484f4df4_9a9a_41cc_87e4_b75a5e07666e.slice - libcontainer container kubepods-besteffort-pod484f4df4_9a9a_41cc_87e4_b75a5e07666e.slice. 
Apr 24 23:38:01.225344 kubelet[3426]: I0424 23:38:01.224815 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzsq9\" (UniqueName: \"kubernetes.io/projected/4359df61-6ba9-4f39-beb9-94953c5d320d-kube-api-access-gzsq9\") pod \"coredns-674b8bbfcf-nvwfk\" (UID: \"4359df61-6ba9-4f39-beb9-94953c5d320d\") " pod="kube-system/coredns-674b8bbfcf-nvwfk" Apr 24 23:38:01.225344 kubelet[3426]: I0424 23:38:01.224915 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4359df61-6ba9-4f39-beb9-94953c5d320d-config-volume\") pod \"coredns-674b8bbfcf-nvwfk\" (UID: \"4359df61-6ba9-4f39-beb9-94953c5d320d\") " pod="kube-system/coredns-674b8bbfcf-nvwfk" Apr 24 23:38:01.225344 kubelet[3426]: I0424 23:38:01.225022 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxzxx\" (UniqueName: \"kubernetes.io/projected/af04062c-5fa7-4ac4-a06d-12268ffd9166-kube-api-access-mxzxx\") pod \"coredns-674b8bbfcf-sv25n\" (UID: \"af04062c-5fa7-4ac4-a06d-12268ffd9166\") " pod="kube-system/coredns-674b8bbfcf-sv25n" Apr 24 23:38:01.227744 kubelet[3426]: I0424 23:38:01.226636 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4q2n\" (UniqueName: \"kubernetes.io/projected/484f4df4-9a9a-41cc-87e4-b75a5e07666e-kube-api-access-t4q2n\") pod \"calico-kube-controllers-8696996ff8-qz98n\" (UID: \"484f4df4-9a9a-41cc-87e4-b75a5e07666e\") " pod="calico-system/calico-kube-controllers-8696996ff8-qz98n" Apr 24 23:38:01.227744 kubelet[3426]: I0424 23:38:01.227225 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af04062c-5fa7-4ac4-a06d-12268ffd9166-config-volume\") pod \"coredns-674b8bbfcf-sv25n\" (UID: 
\"af04062c-5fa7-4ac4-a06d-12268ffd9166\") " pod="kube-system/coredns-674b8bbfcf-sv25n" Apr 24 23:38:01.227744 kubelet[3426]: I0424 23:38:01.227295 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/484f4df4-9a9a-41cc-87e4-b75a5e07666e-tigera-ca-bundle\") pod \"calico-kube-controllers-8696996ff8-qz98n\" (UID: \"484f4df4-9a9a-41cc-87e4-b75a5e07666e\") " pod="calico-system/calico-kube-controllers-8696996ff8-qz98n" Apr 24 23:38:01.235721 systemd[1]: Created slice kubepods-besteffort-pod9fd92f4c_e0fc_4914_8842_cd565a232751.slice - libcontainer container kubepods-besteffort-pod9fd92f4c_e0fc_4914_8842_cd565a232751.slice. Apr 24 23:38:01.253308 systemd[1]: Created slice kubepods-besteffort-podb3876f02_9c03_4cf9_883e_82b297082dbe.slice - libcontainer container kubepods-besteffort-podb3876f02_9c03_4cf9_883e_82b297082dbe.slice. Apr 24 23:38:01.270961 systemd[1]: Created slice kubepods-besteffort-pod65e053c6_4f3f_427f_8a78_6c78634b5620.slice - libcontainer container kubepods-besteffort-pod65e053c6_4f3f_427f_8a78_6c78634b5620.slice. Apr 24 23:38:01.289775 systemd[1]: Created slice kubepods-besteffort-pod8d59be44_1022_4a6d_9634_d8c1fdfc4c4b.slice - libcontainer container kubepods-besteffort-pod8d59be44_1022_4a6d_9634_d8c1fdfc4c4b.slice. 
Apr 24 23:38:01.329902 kubelet[3426]: I0424 23:38:01.329821 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd92f4c-e0fc-4914-8842-cd565a232751-whisker-ca-bundle\") pod \"whisker-5ddfdf984-kwv9n\" (UID: \"9fd92f4c-e0fc-4914-8842-cd565a232751\") " pod="calico-system/whisker-5ddfdf984-kwv9n" Apr 24 23:38:01.330338 kubelet[3426]: I0424 23:38:01.330148 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq59v\" (UniqueName: \"kubernetes.io/projected/9fd92f4c-e0fc-4914-8842-cd565a232751-kube-api-access-kq59v\") pod \"whisker-5ddfdf984-kwv9n\" (UID: \"9fd92f4c-e0fc-4914-8842-cd565a232751\") " pod="calico-system/whisker-5ddfdf984-kwv9n" Apr 24 23:38:01.330506 kubelet[3426]: I0424 23:38:01.330449 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8d59be44-1022-4a6d-9634-d8c1fdfc4c4b-goldmane-key-pair\") pod \"goldmane-5b85766d88-77cfg\" (UID: \"8d59be44-1022-4a6d-9634-d8c1fdfc4c4b\") " pod="calico-system/goldmane-5b85766d88-77cfg" Apr 24 23:38:01.330752 kubelet[3426]: I0424 23:38:01.330723 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d59be44-1022-4a6d-9634-d8c1fdfc4c4b-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-77cfg\" (UID: \"8d59be44-1022-4a6d-9634-d8c1fdfc4c4b\") " pod="calico-system/goldmane-5b85766d88-77cfg" Apr 24 23:38:01.331998 kubelet[3426]: I0424 23:38:01.331704 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b3876f02-9c03-4cf9-883e-82b297082dbe-calico-apiserver-certs\") pod \"calico-apiserver-df9f5595c-hn778\" (UID: 
\"b3876f02-9c03-4cf9-883e-82b297082dbe\") " pod="calico-system/calico-apiserver-df9f5595c-hn778" Apr 24 23:38:01.331998 kubelet[3426]: I0424 23:38:01.331779 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fs49\" (UniqueName: \"kubernetes.io/projected/b3876f02-9c03-4cf9-883e-82b297082dbe-kube-api-access-8fs49\") pod \"calico-apiserver-df9f5595c-hn778\" (UID: \"b3876f02-9c03-4cf9-883e-82b297082dbe\") " pod="calico-system/calico-apiserver-df9f5595c-hn778" Apr 24 23:38:01.331998 kubelet[3426]: I0424 23:38:01.331825 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9fd92f4c-e0fc-4914-8842-cd565a232751-nginx-config\") pod \"whisker-5ddfdf984-kwv9n\" (UID: \"9fd92f4c-e0fc-4914-8842-cd565a232751\") " pod="calico-system/whisker-5ddfdf984-kwv9n" Apr 24 23:38:01.331998 kubelet[3426]: I0424 23:38:01.331865 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/65e053c6-4f3f-427f-8a78-6c78634b5620-calico-apiserver-certs\") pod \"calico-apiserver-df9f5595c-2967j\" (UID: \"65e053c6-4f3f-427f-8a78-6c78634b5620\") " pod="calico-system/calico-apiserver-df9f5595c-2967j" Apr 24 23:38:01.331998 kubelet[3426]: I0424 23:38:01.331924 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxrh\" (UniqueName: \"kubernetes.io/projected/8d59be44-1022-4a6d-9634-d8c1fdfc4c4b-kube-api-access-mnxrh\") pod \"goldmane-5b85766d88-77cfg\" (UID: \"8d59be44-1022-4a6d-9634-d8c1fdfc4c4b\") " pod="calico-system/goldmane-5b85766d88-77cfg" Apr 24 23:38:01.332388 kubelet[3426]: I0424 23:38:01.332074 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8d59be44-1022-4a6d-9634-d8c1fdfc4c4b-config\") pod \"goldmane-5b85766d88-77cfg\" (UID: \"8d59be44-1022-4a6d-9634-d8c1fdfc4c4b\") " pod="calico-system/goldmane-5b85766d88-77cfg" Apr 24 23:38:01.332388 kubelet[3426]: I0424 23:38:01.332119 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9fd92f4c-e0fc-4914-8842-cd565a232751-whisker-backend-key-pair\") pod \"whisker-5ddfdf984-kwv9n\" (UID: \"9fd92f4c-e0fc-4914-8842-cd565a232751\") " pod="calico-system/whisker-5ddfdf984-kwv9n" Apr 24 23:38:01.332388 kubelet[3426]: I0424 23:38:01.332181 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpjvr\" (UniqueName: \"kubernetes.io/projected/65e053c6-4f3f-427f-8a78-6c78634b5620-kube-api-access-rpjvr\") pod \"calico-apiserver-df9f5595c-2967j\" (UID: \"65e053c6-4f3f-427f-8a78-6c78634b5620\") " pod="calico-system/calico-apiserver-df9f5595c-2967j" Apr 24 23:38:01.431086 containerd[2018]: time="2026-04-24T23:38:01.430852105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nvwfk,Uid:4359df61-6ba9-4f39-beb9-94953c5d320d,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:01.486999 containerd[2018]: time="2026-04-24T23:38:01.486187129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv25n,Uid:af04062c-5fa7-4ac4-a06d-12268ffd9166,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:01.540308 containerd[2018]: time="2026-04-24T23:38:01.539091565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8696996ff8-qz98n,Uid:484f4df4-9a9a-41cc-87e4-b75a5e07666e,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:01.547409 containerd[2018]: time="2026-04-24T23:38:01.547336141Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5ddfdf984-kwv9n,Uid:9fd92f4c-e0fc-4914-8842-cd565a232751,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:01.564486 containerd[2018]: time="2026-04-24T23:38:01.564434798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9f5595c-hn778,Uid:b3876f02-9c03-4cf9-883e-82b297082dbe,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:01.584747 containerd[2018]: time="2026-04-24T23:38:01.584283494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9f5595c-2967j,Uid:65e053c6-4f3f-427f-8a78-6c78634b5620,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:01.599692 containerd[2018]: time="2026-04-24T23:38:01.599626886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-77cfg,Uid:8d59be44-1022-4a6d-9634-d8c1fdfc4c4b,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:01.605502 systemd[1]: Created slice kubepods-besteffort-pod6eaa5414_466a_4dc1_ab02_5e5914bb112e.slice - libcontainer container kubepods-besteffort-pod6eaa5414_466a_4dc1_ab02_5e5914bb112e.slice. 
Apr 24 23:38:01.615608 containerd[2018]: time="2026-04-24T23:38:01.615530438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d77dn,Uid:6eaa5414-466a-4dc1-ab02-5e5914bb112e,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:01.932674 containerd[2018]: time="2026-04-24T23:38:01.932223615Z" level=error msg="Failed to destroy network for sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:01.961014 containerd[2018]: time="2026-04-24T23:38:01.960383104Z" level=error msg="encountered an error cleaning up failed sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:01.971470 containerd[2018]: time="2026-04-24T23:38:01.971399788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv25n,Uid:af04062c-5fa7-4ac4-a06d-12268ffd9166,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:01.972066 kubelet[3426]: E0424 23:38:01.971947 3426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:01.972254 kubelet[3426]: E0424 23:38:01.972089 3426 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sv25n" Apr 24 23:38:01.972254 kubelet[3426]: E0424 23:38:01.972126 3426 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sv25n" Apr 24 23:38:01.972390 kubelet[3426]: E0424 23:38:01.972240 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sv25n_kube-system(af04062c-5fa7-4ac4-a06d-12268ffd9166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sv25n_kube-system(af04062c-5fa7-4ac4-a06d-12268ffd9166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sv25n" podUID="af04062c-5fa7-4ac4-a06d-12268ffd9166" Apr 24 23:38:01.987056 containerd[2018]: time="2026-04-24T23:38:01.985550464Z" level=info msg="CreateContainer within sandbox 
\"c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:38:02.003730 containerd[2018]: time="2026-04-24T23:38:02.003589620Z" level=error msg="Failed to destroy network for sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.009873 containerd[2018]: time="2026-04-24T23:38:02.009282768Z" level=error msg="encountered an error cleaning up failed sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.009873 containerd[2018]: time="2026-04-24T23:38:02.009387564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nvwfk,Uid:4359df61-6ba9-4f39-beb9-94953c5d320d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.010913 kubelet[3426]: E0424 23:38:02.010861 3426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.011766 kubelet[3426]: E0424 
23:38:02.011627 3426 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nvwfk" Apr 24 23:38:02.011766 kubelet[3426]: E0424 23:38:02.011704 3426 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nvwfk" Apr 24 23:38:02.012292 kubelet[3426]: E0424 23:38:02.012057 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nvwfk_kube-system(4359df61-6ba9-4f39-beb9-94953c5d320d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nvwfk_kube-system(4359df61-6ba9-4f39-beb9-94953c5d320d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nvwfk" podUID="4359df61-6ba9-4f39-beb9-94953c5d320d" Apr 24 23:38:02.063555 containerd[2018]: time="2026-04-24T23:38:02.063456228Z" level=info msg="CreateContainer within sandbox \"c1ff540a999b6409b87802697185f58606ce56846171241509849048f53f0932\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns 
container id \"959d96bbe55600143817234e2b398b6ac58d1a16a42e378ba2f442b1e9c78d68\"" Apr 24 23:38:02.069360 containerd[2018]: time="2026-04-24T23:38:02.069233016Z" level=info msg="StartContainer for \"959d96bbe55600143817234e2b398b6ac58d1a16a42e378ba2f442b1e9c78d68\"" Apr 24 23:38:02.238087 containerd[2018]: time="2026-04-24T23:38:02.235512721Z" level=error msg="Failed to destroy network for sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.241898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731-shm.mount: Deactivated successfully. Apr 24 23:38:02.253805 containerd[2018]: time="2026-04-24T23:38:02.252559045Z" level=error msg="encountered an error cleaning up failed sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.253805 containerd[2018]: time="2026-04-24T23:38:02.252680533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9f5595c-hn778,Uid:b3876f02-9c03-4cf9-883e-82b297082dbe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.255348 kubelet[3426]: E0424 23:38:02.253216 3426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.255348 kubelet[3426]: E0424 23:38:02.253293 3426 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-df9f5595c-hn778" Apr 24 23:38:02.255348 kubelet[3426]: E0424 23:38:02.253329 3426 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-df9f5595c-hn778" Apr 24 23:38:02.255575 kubelet[3426]: E0424 23:38:02.253420 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-df9f5595c-hn778_calico-system(b3876f02-9c03-4cf9-883e-82b297082dbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-df9f5595c-hn778_calico-system(b3876f02-9c03-4cf9-883e-82b297082dbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-apiserver-df9f5595c-hn778" podUID="b3876f02-9c03-4cf9-883e-82b297082dbe" Apr 24 23:38:02.256583 systemd[1]: Started cri-containerd-959d96bbe55600143817234e2b398b6ac58d1a16a42e378ba2f442b1e9c78d68.scope - libcontainer container 959d96bbe55600143817234e2b398b6ac58d1a16a42e378ba2f442b1e9c78d68. Apr 24 23:38:02.337775 containerd[2018]: time="2026-04-24T23:38:02.337475737Z" level=error msg="Failed to destroy network for sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.338469 containerd[2018]: time="2026-04-24T23:38:02.338415013Z" level=error msg="encountered an error cleaning up failed sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.340569 containerd[2018]: time="2026-04-24T23:38:02.340395313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9f5595c-2967j,Uid:65e053c6-4f3f-427f-8a78-6c78634b5620,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.342637 kubelet[3426]: E0424 23:38:02.341805 3426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.342637 kubelet[3426]: E0424 23:38:02.341918 3426 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-df9f5595c-2967j" Apr 24 23:38:02.342637 kubelet[3426]: E0424 23:38:02.342011 3426 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-df9f5595c-2967j" Apr 24 23:38:02.342895 kubelet[3426]: E0424 23:38:02.342546 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-df9f5595c-2967j_calico-system(65e053c6-4f3f-427f-8a78-6c78634b5620)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-df9f5595c-2967j_calico-system(65e053c6-4f3f-427f-8a78-6c78634b5620)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-df9f5595c-2967j" podUID="65e053c6-4f3f-427f-8a78-6c78634b5620" 
Apr 24 23:38:02.347486 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb-shm.mount: Deactivated successfully. Apr 24 23:38:02.358199 containerd[2018]: time="2026-04-24T23:38:02.358041349Z" level=error msg="Failed to destroy network for sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.365369 containerd[2018]: time="2026-04-24T23:38:02.365125706Z" level=error msg="encountered an error cleaning up failed sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.366051 containerd[2018]: time="2026-04-24T23:38:02.365784146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8696996ff8-qz98n,Uid:484f4df4-9a9a-41cc-87e4-b75a5e07666e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.366540 containerd[2018]: time="2026-04-24T23:38:02.365217530Z" level=error msg="Failed to destroy network for sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.367665 
kubelet[3426]: E0424 23:38:02.366963 3426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.367665 kubelet[3426]: E0424 23:38:02.367102 3426 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8696996ff8-qz98n" Apr 24 23:38:02.367665 kubelet[3426]: E0424 23:38:02.367137 3426 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8696996ff8-qz98n" Apr 24 23:38:02.368093 kubelet[3426]: E0424 23:38:02.367209 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8696996ff8-qz98n_calico-system(484f4df4-9a9a-41cc-87e4-b75a5e07666e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8696996ff8-qz98n_calico-system(484f4df4-9a9a-41cc-87e4-b75a5e07666e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8696996ff8-qz98n" podUID="484f4df4-9a9a-41cc-87e4-b75a5e07666e" Apr 24 23:38:02.371595 containerd[2018]: time="2026-04-24T23:38:02.371040962Z" level=error msg="encountered an error cleaning up failed sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.372362 containerd[2018]: time="2026-04-24T23:38:02.371736614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d77dn,Uid:6eaa5414-466a-4dc1-ab02-5e5914bb112e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.373680 kubelet[3426]: E0424 23:38:02.372964 3426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.373680 kubelet[3426]: E0424 23:38:02.373376 3426 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d77dn" Apr 24 23:38:02.374760 kubelet[3426]: E0424 23:38:02.374052 3426 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d77dn" Apr 24 23:38:02.378029 kubelet[3426]: E0424 23:38:02.376298 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d77dn_calico-system(6eaa5414-466a-4dc1-ab02-5e5914bb112e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d77dn_calico-system(6eaa5414-466a-4dc1-ab02-5e5914bb112e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d77dn" podUID="6eaa5414-466a-4dc1-ab02-5e5914bb112e" Apr 24 23:38:02.395072 containerd[2018]: time="2026-04-24T23:38:02.394472618Z" level=error msg="Failed to destroy network for sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.396378 containerd[2018]: time="2026-04-24T23:38:02.396109790Z" level=error msg="encountered an 
error cleaning up failed sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.396654 containerd[2018]: time="2026-04-24T23:38:02.396605486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-77cfg,Uid:8d59be44-1022-4a6d-9634-d8c1fdfc4c4b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.397692 kubelet[3426]: E0424 23:38:02.397544 3426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.397914 kubelet[3426]: E0424 23:38:02.397654 3426 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-77cfg" Apr 24 23:38:02.397914 kubelet[3426]: E0424 23:38:02.397859 3426 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-77cfg" Apr 24 23:38:02.398315 kubelet[3426]: E0424 23:38:02.398250 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-77cfg_calico-system(8d59be44-1022-4a6d-9634-d8c1fdfc4c4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-77cfg_calico-system(8d59be44-1022-4a6d-9634-d8c1fdfc4c4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-77cfg" podUID="8d59be44-1022-4a6d-9634-d8c1fdfc4c4b" Apr 24 23:38:02.414006 containerd[2018]: time="2026-04-24T23:38:02.413767034Z" level=info msg="StartContainer for \"959d96bbe55600143817234e2b398b6ac58d1a16a42e378ba2f442b1e9c78d68\" returns successfully" Apr 24 23:38:02.418138 containerd[2018]: time="2026-04-24T23:38:02.418046678Z" level=error msg="Failed to destroy network for sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.418937 containerd[2018]: time="2026-04-24T23:38:02.418835438Z" level=error msg="encountered an error cleaning up failed sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.419553 containerd[2018]: time="2026-04-24T23:38:02.418934966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ddfdf984-kwv9n,Uid:9fd92f4c-e0fc-4914-8842-cd565a232751,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.419643 kubelet[3426]: E0424 23:38:02.419254 3426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:38:02.419643 kubelet[3426]: E0424 23:38:02.419332 3426 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ddfdf984-kwv9n" Apr 24 23:38:02.419643 kubelet[3426]: E0424 23:38:02.419369 3426 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ddfdf984-kwv9n" Apr 24 23:38:02.419843 kubelet[3426]: E0424 23:38:02.419440 3426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5ddfdf984-kwv9n_calico-system(9fd92f4c-e0fc-4914-8842-cd565a232751)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5ddfdf984-kwv9n_calico-system(9fd92f4c-e0fc-4914-8842-cd565a232751)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ddfdf984-kwv9n" podUID="9fd92f4c-e0fc-4914-8842-cd565a232751" Apr 24 23:38:02.948462 kubelet[3426]: I0424 23:38:02.948403 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:02.955084 containerd[2018]: time="2026-04-24T23:38:02.954464308Z" level=info msg="StopPodSandbox for \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\"" Apr 24 23:38:02.956636 containerd[2018]: time="2026-04-24T23:38:02.955392676Z" level=info msg="Ensure that sandbox 24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850 in task-service has been cleanup successfully" Apr 24 23:38:02.961900 kubelet[3426]: I0424 23:38:02.961427 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:02.967853 containerd[2018]: time="2026-04-24T23:38:02.967770557Z" level=info msg="StopPodSandbox for \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\"" Apr 24 23:38:02.968278 
containerd[2018]: time="2026-04-24T23:38:02.968096753Z" level=info msg="Ensure that sandbox cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb in task-service has been cleanup successfully" Apr 24 23:38:02.978023 kubelet[3426]: I0424 23:38:02.977509 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:02.981910 containerd[2018]: time="2026-04-24T23:38:02.981838073Z" level=info msg="StopPodSandbox for \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\"" Apr 24 23:38:02.983511 containerd[2018]: time="2026-04-24T23:38:02.983455469Z" level=info msg="Ensure that sandbox d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731 in task-service has been cleanup successfully" Apr 24 23:38:02.989320 kubelet[3426]: I0424 23:38:02.988339 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:02.991451 containerd[2018]: time="2026-04-24T23:38:02.991366577Z" level=info msg="StopPodSandbox for \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\"" Apr 24 23:38:02.991732 containerd[2018]: time="2026-04-24T23:38:02.991674257Z" level=info msg="Ensure that sandbox 51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc in task-service has been cleanup successfully" Apr 24 23:38:03.006008 kubelet[3426]: I0424 23:38:03.005148 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:03.007268 containerd[2018]: time="2026-04-24T23:38:03.007195885Z" level=info msg="StopPodSandbox for \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\"" Apr 24 23:38:03.012553 containerd[2018]: time="2026-04-24T23:38:03.011569333Z" level=info msg="Ensure that sandbox 
c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8 in task-service has been cleanup successfully" Apr 24 23:38:03.029725 kubelet[3426]: I0424 23:38:03.029593 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pjgsk" podStartSLOduration=5.051032362 podStartE2EDuration="20.029567857s" podCreationTimestamp="2026-04-24 23:37:43 +0000 UTC" firstStartedPulling="2026-04-24 23:37:44.00605559 +0000 UTC m=+29.730638704" lastFinishedPulling="2026-04-24 23:37:58.984550537 +0000 UTC m=+44.709174199" observedRunningTime="2026-04-24 23:38:03.028295809 +0000 UTC m=+48.752879019" watchObservedRunningTime="2026-04-24 23:38:03.029567857 +0000 UTC m=+48.754151007" Apr 24 23:38:03.047023 kubelet[3426]: I0424 23:38:03.045709 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:03.060364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850-shm.mount: Deactivated successfully. Apr 24 23:38:03.061262 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8-shm.mount: Deactivated successfully. Apr 24 23:38:03.063323 containerd[2018]: time="2026-04-24T23:38:03.062954437Z" level=info msg="StopPodSandbox for \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\"" Apr 24 23:38:03.061405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684-shm.mount: Deactivated successfully. Apr 24 23:38:03.061540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc-shm.mount: Deactivated successfully. 
Apr 24 23:38:03.064445 containerd[2018]: time="2026-04-24T23:38:03.064162117Z" level=info msg="Ensure that sandbox d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684 in task-service has been cleanup successfully" Apr 24 23:38:03.081235 kubelet[3426]: I0424 23:38:03.081191 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:03.089134 containerd[2018]: time="2026-04-24T23:38:03.089065645Z" level=info msg="StopPodSandbox for \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\"" Apr 24 23:38:03.093321 containerd[2018]: time="2026-04-24T23:38:03.093264145Z" level=info msg="Ensure that sandbox 5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3 in task-service has been cleanup successfully" Apr 24 23:38:03.136506 kubelet[3426]: I0424 23:38:03.136471 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:03.139761 containerd[2018]: time="2026-04-24T23:38:03.139686301Z" level=info msg="StopPodSandbox for \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\"" Apr 24 23:38:03.151660 containerd[2018]: time="2026-04-24T23:38:03.150747913Z" level=info msg="Ensure that sandbox b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca in task-service has been cleanup successfully" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.624 [INFO][4603] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.628 [INFO][4603] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" iface="eth0" netns="/var/run/netns/cni-c875d280-7870-a1da-1607-057afd11cb2b" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.642 [INFO][4603] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" iface="eth0" netns="/var/run/netns/cni-c875d280-7870-a1da-1607-057afd11cb2b" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.642 [INFO][4603] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" iface="eth0" netns="/var/run/netns/cni-c875d280-7870-a1da-1607-057afd11cb2b" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.642 [INFO][4603] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.642 [INFO][4603] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.849 [INFO][4743] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" HandleID="k8s-pod-network.24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.850 [INFO][4743] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.850 [INFO][4743] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.873 [WARNING][4743] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" HandleID="k8s-pod-network.24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.873 [INFO][4743] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" HandleID="k8s-pod-network.24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.876 [INFO][4743] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:03.906432 containerd[2018]: 2026-04-24 23:38:03.886 [INFO][4603] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:03.914617 containerd[2018]: time="2026-04-24T23:38:03.913088885Z" level=info msg="TearDown network for sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\" successfully" Apr 24 23:38:03.914617 containerd[2018]: time="2026-04-24T23:38:03.913145009Z" level=info msg="StopPodSandbox for \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\" returns successfully" Apr 24 23:38:03.914617 containerd[2018]: time="2026-04-24T23:38:03.914395949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-77cfg,Uid:8d59be44-1022-4a6d-9634-d8c1fdfc4c4b,Namespace:calico-system,Attempt:1,}" Apr 24 23:38:03.916749 systemd[1]: run-netns-cni\x2dc875d280\x2d7870\x2da1da\x2d1607\x2d057afd11cb2b.mount: Deactivated successfully. 
Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.598 [INFO][4640] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.600 [INFO][4640] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" iface="eth0" netns="/var/run/netns/cni-f11d0fc4-324a-91eb-d170-b470191c1671" Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.600 [INFO][4640] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" iface="eth0" netns="/var/run/netns/cni-f11d0fc4-324a-91eb-d170-b470191c1671" Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.603 [INFO][4640] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" iface="eth0" netns="/var/run/netns/cni-f11d0fc4-324a-91eb-d170-b470191c1671" Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.603 [INFO][4640] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.603 [INFO][4640] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.900 [INFO][4734] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" HandleID="k8s-pod-network.51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.900 
[INFO][4734] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.900 [INFO][4734] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.933 [WARNING][4734] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" HandleID="k8s-pod-network.51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.933 [INFO][4734] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" HandleID="k8s-pod-network.51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.940 [INFO][4734] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:03.983122 containerd[2018]: 2026-04-24 23:38:03.951 [INFO][4640] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:03.983122 containerd[2018]: time="2026-04-24T23:38:03.983082966Z" level=info msg="TearDown network for sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\" successfully" Apr 24 23:38:03.989999 containerd[2018]: time="2026-04-24T23:38:03.983132106Z" level=info msg="StopPodSandbox for \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\" returns successfully" Apr 24 23:38:03.989999 containerd[2018]: time="2026-04-24T23:38:03.984380898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8696996ff8-qz98n,Uid:484f4df4-9a9a-41cc-87e4-b75a5e07666e,Namespace:calico-system,Attempt:1,}" Apr 24 23:38:04.064790 systemd[1]: run-netns-cni\x2df11d0fc4\x2d324a\x2d91eb\x2dd170\x2db470191c1671.mount: Deactivated successfully. Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:03.660 [INFO][4611] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:03.663 [INFO][4611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" iface="eth0" netns="/var/run/netns/cni-ac9a5ae5-2b1d-c4ff-d85f-1c1056ee70b8" Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:03.663 [INFO][4611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" iface="eth0" netns="/var/run/netns/cni-ac9a5ae5-2b1d-c4ff-d85f-1c1056ee70b8" Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:03.664 [INFO][4611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" iface="eth0" netns="/var/run/netns/cni-ac9a5ae5-2b1d-c4ff-d85f-1c1056ee70b8" Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:03.665 [INFO][4611] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:03.666 [INFO][4611] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:04.007 [INFO][4748] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" HandleID="k8s-pod-network.cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:04.011 [INFO][4748] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:04.012 [INFO][4748] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:04.044 [WARNING][4748] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" HandleID="k8s-pod-network.cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:04.044 [INFO][4748] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" HandleID="k8s-pod-network.cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:04.056 [INFO][4748] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:04.081005 containerd[2018]: 2026-04-24 23:38:04.070 [INFO][4611] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:04.087350 containerd[2018]: time="2026-04-24T23:38:04.085591106Z" level=info msg="TearDown network for sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\" successfully" Apr 24 23:38:04.087350 containerd[2018]: time="2026-04-24T23:38:04.085643330Z" level=info msg="StopPodSandbox for \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\" returns successfully" Apr 24 23:38:04.090514 systemd[1]: run-netns-cni\x2dac9a5ae5\x2d2b1d\x2dc4ff\x2dd85f\x2d1c1056ee70b8.mount: Deactivated successfully. 
Apr 24 23:38:04.095655 containerd[2018]: time="2026-04-24T23:38:04.095123654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9f5595c-2967j,Uid:65e053c6-4f3f-427f-8a78-6c78634b5620,Namespace:calico-system,Attempt:1,}" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:03.656 [INFO][4643] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:03.660 [INFO][4643] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" iface="eth0" netns="/var/run/netns/cni-96437ccb-8390-e0d7-5d6a-a8efc3f7f494" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:03.662 [INFO][4643] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" iface="eth0" netns="/var/run/netns/cni-96437ccb-8390-e0d7-5d6a-a8efc3f7f494" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:03.673 [INFO][4643] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" iface="eth0" netns="/var/run/netns/cni-96437ccb-8390-e0d7-5d6a-a8efc3f7f494" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:03.673 [INFO][4643] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:03.673 [INFO][4643] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:04.016 [INFO][4750] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" HandleID="k8s-pod-network.d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:04.016 [INFO][4750] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:04.067 [INFO][4750] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:04.105 [WARNING][4750] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" HandleID="k8s-pod-network.d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:04.105 [INFO][4750] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" HandleID="k8s-pod-network.d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:04.110 [INFO][4750] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:04.161885 containerd[2018]: 2026-04-24 23:38:04.127 [INFO][4643] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:04.175933 systemd[1]: run-netns-cni\x2d96437ccb\x2d8390\x2de0d7\x2d5d6a\x2da8efc3f7f494.mount: Deactivated successfully. 
Apr 24 23:38:04.179995 containerd[2018]: time="2026-04-24T23:38:04.178917735Z" level=info msg="TearDown network for sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\" successfully" Apr 24 23:38:04.179995 containerd[2018]: time="2026-04-24T23:38:04.179394843Z" level=info msg="StopPodSandbox for \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\" returns successfully" Apr 24 23:38:04.193407 containerd[2018]: time="2026-04-24T23:38:04.193334751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9f5595c-hn778,Uid:b3876f02-9c03-4cf9-883e-82b297082dbe,Namespace:calico-system,Attempt:1,}" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:03.669 [INFO][4681] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:03.674 [INFO][4681] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" iface="eth0" netns="/var/run/netns/cni-465e6558-eb6b-a323-4ff0-b8df2d1ef928" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:03.675 [INFO][4681] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" iface="eth0" netns="/var/run/netns/cni-465e6558-eb6b-a323-4ff0-b8df2d1ef928" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:03.680 [INFO][4681] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" iface="eth0" netns="/var/run/netns/cni-465e6558-eb6b-a323-4ff0-b8df2d1ef928" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:03.680 [INFO][4681] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:03.680 [INFO][4681] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:04.014 [INFO][4752] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" HandleID="k8s-pod-network.d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Workload="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:04.021 [INFO][4752] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:04.110 [INFO][4752] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:04.159 [WARNING][4752] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" HandleID="k8s-pod-network.d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Workload="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:04.159 [INFO][4752] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" HandleID="k8s-pod-network.d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Workload="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:04.173 [INFO][4752] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:04.217121 containerd[2018]: 2026-04-24 23:38:04.200 [INFO][4681] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:04.220616 containerd[2018]: time="2026-04-24T23:38:04.220541979Z" level=info msg="TearDown network for sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\" successfully" Apr 24 23:38:04.220616 containerd[2018]: time="2026-04-24T23:38:04.220599591Z" level=info msg="StopPodSandbox for \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\" returns successfully" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:03.599 [INFO][4699] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:03.600 [INFO][4699] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" iface="eth0" netns="/var/run/netns/cni-d8198388-8c32-a13c-00ad-1babf09dccc0" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:03.600 [INFO][4699] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" iface="eth0" netns="/var/run/netns/cni-d8198388-8c32-a13c-00ad-1babf09dccc0" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:03.602 [INFO][4699] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" iface="eth0" netns="/var/run/netns/cni-d8198388-8c32-a13c-00ad-1babf09dccc0" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:03.603 [INFO][4699] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:03.603 [INFO][4699] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:04.010 [INFO][4732] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" HandleID="k8s-pod-network.b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:04.014 [INFO][4732] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:04.169 [INFO][4732] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:04.220 [WARNING][4732] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" HandleID="k8s-pod-network.b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:04.221 [INFO][4732] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" HandleID="k8s-pod-network.b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:04.234 [INFO][4732] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:04.264281 containerd[2018]: 2026-04-24 23:38:04.249 [INFO][4699] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:04.270324 kubelet[3426]: I0424 23:38:04.269524 3426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq59v\" (UniqueName: \"kubernetes.io/projected/9fd92f4c-e0fc-4914-8842-cd565a232751-kube-api-access-kq59v\") pod \"9fd92f4c-e0fc-4914-8842-cd565a232751\" (UID: \"9fd92f4c-e0fc-4914-8842-cd565a232751\") " Apr 24 23:38:04.270324 kubelet[3426]: I0424 23:38:04.269612 3426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9fd92f4c-e0fc-4914-8842-cd565a232751-nginx-config\") pod \"9fd92f4c-e0fc-4914-8842-cd565a232751\" (UID: \"9fd92f4c-e0fc-4914-8842-cd565a232751\") " Apr 24 23:38:04.270324 kubelet[3426]: I0424 23:38:04.269660 3426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9fd92f4c-e0fc-4914-8842-cd565a232751-whisker-backend-key-pair\") pod \"9fd92f4c-e0fc-4914-8842-cd565a232751\" 
(UID: \"9fd92f4c-e0fc-4914-8842-cd565a232751\") " Apr 24 23:38:04.270324 kubelet[3426]: I0424 23:38:04.269706 3426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd92f4c-e0fc-4914-8842-cd565a232751-whisker-ca-bundle\") pod \"9fd92f4c-e0fc-4914-8842-cd565a232751\" (UID: \"9fd92f4c-e0fc-4914-8842-cd565a232751\") " Apr 24 23:38:04.273013 containerd[2018]: time="2026-04-24T23:38:04.272317527Z" level=info msg="TearDown network for sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\" successfully" Apr 24 23:38:04.273013 containerd[2018]: time="2026-04-24T23:38:04.272363619Z" level=info msg="StopPodSandbox for \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\" returns successfully" Apr 24 23:38:04.279771 containerd[2018]: time="2026-04-24T23:38:04.279380055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nvwfk,Uid:4359df61-6ba9-4f39-beb9-94953c5d320d,Namespace:kube-system,Attempt:1,}" Apr 24 23:38:04.281342 kubelet[3426]: I0424 23:38:04.281272 3426 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd92f4c-e0fc-4914-8842-cd565a232751-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "9fd92f4c-e0fc-4914-8842-cd565a232751" (UID: "9fd92f4c-e0fc-4914-8842-cd565a232751"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:38:04.286080 kubelet[3426]: I0424 23:38:04.285810 3426 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd92f4c-e0fc-4914-8842-cd565a232751-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9fd92f4c-e0fc-4914-8842-cd565a232751" (UID: "9fd92f4c-e0fc-4914-8842-cd565a232751"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:38:04.291146 kubelet[3426]: I0424 23:38:04.290870 3426 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd92f4c-e0fc-4914-8842-cd565a232751-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9fd92f4c-e0fc-4914-8842-cd565a232751" (UID: "9fd92f4c-e0fc-4914-8842-cd565a232751"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:38:04.294939 kubelet[3426]: I0424 23:38:04.293902 3426 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd92f4c-e0fc-4914-8842-cd565a232751-kube-api-access-kq59v" (OuterVolumeSpecName: "kube-api-access-kq59v") pod "9fd92f4c-e0fc-4914-8842-cd565a232751" (UID: "9fd92f4c-e0fc-4914-8842-cd565a232751"). InnerVolumeSpecName "kube-api-access-kq59v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:03.655 [INFO][4695] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:03.673 [INFO][4695] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" iface="eth0" netns="/var/run/netns/cni-4f808dac-e291-6837-9097-b93acd556063" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:03.678 [INFO][4695] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" iface="eth0" netns="/var/run/netns/cni-4f808dac-e291-6837-9097-b93acd556063" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:03.688 [INFO][4695] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" iface="eth0" netns="/var/run/netns/cni-4f808dac-e291-6837-9097-b93acd556063" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:03.688 [INFO][4695] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:03.688 [INFO][4695] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:04.028 [INFO][4760] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" HandleID="k8s-pod-network.5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:04.028 [INFO][4760] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:04.238 [INFO][4760] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:04.280 [WARNING][4760] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" HandleID="k8s-pod-network.5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:04.280 [INFO][4760] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" HandleID="k8s-pod-network.5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:04.295 [INFO][4760] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:04.329256 containerd[2018]: 2026-04-24 23:38:04.307 [INFO][4695] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:04.332512 containerd[2018]: time="2026-04-24T23:38:04.331282239Z" level=info msg="TearDown network for sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\" successfully" Apr 24 23:38:04.334823 containerd[2018]: time="2026-04-24T23:38:04.333800607Z" level=info msg="StopPodSandbox for \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\" returns successfully" Apr 24 23:38:04.337383 containerd[2018]: time="2026-04-24T23:38:04.337315359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv25n,Uid:af04062c-5fa7-4ac4-a06d-12268ffd9166,Namespace:kube-system,Attempt:1,}" Apr 24 23:38:04.372153 kubelet[3426]: I0424 23:38:04.371451 3426 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kq59v\" (UniqueName: \"kubernetes.io/projected/9fd92f4c-e0fc-4914-8842-cd565a232751-kube-api-access-kq59v\") on node \"ip-172-31-17-112\" DevicePath \"\"" Apr 24 23:38:04.372153 kubelet[3426]: I0424 23:38:04.371511 3426 
reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9fd92f4c-e0fc-4914-8842-cd565a232751-nginx-config\") on node \"ip-172-31-17-112\" DevicePath \"\"" Apr 24 23:38:04.373659 kubelet[3426]: I0424 23:38:04.373287 3426 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9fd92f4c-e0fc-4914-8842-cd565a232751-whisker-backend-key-pair\") on node \"ip-172-31-17-112\" DevicePath \"\"" Apr 24 23:38:04.373659 kubelet[3426]: I0424 23:38:04.373342 3426 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd92f4c-e0fc-4914-8842-cd565a232751-whisker-ca-bundle\") on node \"ip-172-31-17-112\" DevicePath \"\"" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:03.663 [INFO][4658] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:03.669 [INFO][4658] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" iface="eth0" netns="/var/run/netns/cni-8042d4ae-fbb7-e655-c7b7-9a56d970c4c4" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:03.690 [INFO][4658] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" iface="eth0" netns="/var/run/netns/cni-8042d4ae-fbb7-e655-c7b7-9a56d970c4c4" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:03.693 [INFO][4658] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" iface="eth0" netns="/var/run/netns/cni-8042d4ae-fbb7-e655-c7b7-9a56d970c4c4" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:03.693 [INFO][4658] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:03.693 [INFO][4658] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:04.050 [INFO][4757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" HandleID="k8s-pod-network.c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:04.055 [INFO][4757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:04.297 [INFO][4757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:04.349 [WARNING][4757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" HandleID="k8s-pod-network.c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:04.350 [INFO][4757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" HandleID="k8s-pod-network.c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:04.356 [INFO][4757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:04.397538 containerd[2018]: 2026-04-24 23:38:04.372 [INFO][4658] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:04.397538 containerd[2018]: time="2026-04-24T23:38:04.396770788Z" level=info msg="TearDown network for sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\" successfully" Apr 24 23:38:04.397538 containerd[2018]: time="2026-04-24T23:38:04.396809512Z" level=info msg="StopPodSandbox for \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\" returns successfully" Apr 24 23:38:04.400672 containerd[2018]: time="2026-04-24T23:38:04.398427040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d77dn,Uid:6eaa5414-466a-4dc1-ab02-5e5914bb112e,Namespace:calico-system,Attempt:1,}" Apr 24 23:38:04.629906 systemd[1]: Removed slice kubepods-besteffort-pod9fd92f4c_e0fc_4914_8842_cd565a232751.slice - libcontainer container kubepods-besteffort-pod9fd92f4c_e0fc_4914_8842_cd565a232751.slice. Apr 24 23:38:05.003568 (udev-worker)[4977]: Network interface NamePolicy= disabled on kernel command line. 
Apr 24 23:38:05.017368 systemd-networkd[1934]: cali3b5848d13c2: Link UP Apr 24 23:38:05.074894 systemd[1]: run-netns-cni\x2d8042d4ae\x2dfbb7\x2de655\x2dc7b7\x2d9a56d970c4c4.mount: Deactivated successfully. Apr 24 23:38:05.075402 systemd[1]: run-netns-cni\x2d465e6558\x2deb6b\x2da323\x2d4ff0\x2db8df2d1ef928.mount: Deactivated successfully. Apr 24 23:38:05.075702 systemd[1]: run-netns-cni\x2d4f808dac\x2de291\x2d6837\x2d9097\x2db93acd556063.mount: Deactivated successfully. Apr 24 23:38:05.075839 systemd[1]: var-lib-kubelet-pods-9fd92f4c\x2de0fc\x2d4914\x2d8842\x2dcd565a232751-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkq59v.mount: Deactivated successfully. Apr 24 23:38:05.076383 systemd[1]: var-lib-kubelet-pods-9fd92f4c\x2de0fc\x2d4914\x2d8842\x2dcd565a232751-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 23:38:05.076551 systemd[1]: run-netns-cni\x2dd8198388\x2d8c32\x2da13c\x2d00ad\x2d1babf09dccc0.mount: Deactivated successfully. Apr 24 23:38:05.107909 systemd-networkd[1934]: cali3b5848d13c2: Gained carrier Apr 24 23:38:05.227332 (udev-worker)[4976]: Network interface NamePolicy= disabled on kernel command line. 
Apr 24 23:38:05.255472 systemd-networkd[1934]: calidb205269d41: Link UP Apr 24 23:38:05.274373 systemd-networkd[1934]: calidb205269d41: Gained carrier Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.436 [ERROR][4798] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.541 [INFO][4798] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0 calico-kube-controllers-8696996ff8- calico-system 484f4df4-9a9a-41cc-87e4-b75a5e07666e 943 0 2026-04-24 23:37:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8696996ff8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-112 calico-kube-controllers-8696996ff8-qz98n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3b5848d13c2 [] [] }} ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Namespace="calico-system" Pod="calico-kube-controllers-8696996ff8-qz98n" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.541 [INFO][4798] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Namespace="calico-system" Pod="calico-kube-controllers-8696996ff8-qz98n" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.711 [INFO][4921] ipam/ipam_plugin.go 235: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" HandleID="k8s-pod-network.ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.770 [INFO][4921] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" HandleID="k8s-pod-network.ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034e290), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-112", "pod":"calico-kube-controllers-8696996ff8-qz98n", "timestamp":"2026-04-24 23:38:04.711496745 +0000 UTC"}, Hostname:"ip-172-31-17-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001d8420)} Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.770 [INFO][4921] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.773 [INFO][4921] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.774 [INFO][4921] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-112' Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.781 [INFO][4921] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" host="ip-172-31-17-112" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.807 [INFO][4921] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-112" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.834 [INFO][4921] ipam/ipam.go 526: Trying affinity for 192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.846 [INFO][4921] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.858 [INFO][4921] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.862 [INFO][4921] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" host="ip-172-31-17-112" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.868 [INFO][4921] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28 Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.897 [INFO][4921] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" host="ip-172-31-17-112" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.931 [INFO][4921] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.129/26] block=192.168.36.128/26 
handle="k8s-pod-network.ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" host="ip-172-31-17-112" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.931 [INFO][4921] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.129/26] handle="k8s-pod-network.ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" host="ip-172-31-17-112" Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.933 [INFO][4921] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:05.291961 containerd[2018]: 2026-04-24 23:38:04.935 [INFO][4921] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.129/26] IPv6=[] ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" HandleID="k8s-pod-network.ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:05.293706 containerd[2018]: 2026-04-24 23:38:04.964 [INFO][4798] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Namespace="calico-system" Pod="calico-kube-controllers-8696996ff8-qz98n" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0", GenerateName:"calico-kube-controllers-8696996ff8-", Namespace:"calico-system", SelfLink:"", UID:"484f4df4-9a9a-41cc-87e4-b75a5e07666e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8696996ff8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"", Pod:"calico-kube-controllers-8696996ff8-qz98n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b5848d13c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:05.293706 containerd[2018]: 2026-04-24 23:38:04.965 [INFO][4798] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.129/32] ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Namespace="calico-system" Pod="calico-kube-controllers-8696996ff8-qz98n" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:05.293706 containerd[2018]: 2026-04-24 23:38:04.965 [INFO][4798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b5848d13c2 ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Namespace="calico-system" Pod="calico-kube-controllers-8696996ff8-qz98n" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:05.293706 containerd[2018]: 2026-04-24 23:38:05.131 [INFO][4798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Namespace="calico-system" Pod="calico-kube-controllers-8696996ff8-qz98n" 
WorkloadEndpoint="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:05.293706 containerd[2018]: 2026-04-24 23:38:05.138 [INFO][4798] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Namespace="calico-system" Pod="calico-kube-controllers-8696996ff8-qz98n" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0", GenerateName:"calico-kube-controllers-8696996ff8-", Namespace:"calico-system", SelfLink:"", UID:"484f4df4-9a9a-41cc-87e4-b75a5e07666e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8696996ff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28", Pod:"calico-kube-controllers-8696996ff8-qz98n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b5848d13c2", 
MAC:"5e:da:a0:bf:fe:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:05.293706 containerd[2018]: 2026-04-24 23:38:05.201 [INFO][4798] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28" Namespace="calico-system" Pod="calico-kube-controllers-8696996ff8-qz98n" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:05.421445 systemd[1]: Created slice kubepods-besteffort-podc5c3744c_27ec_45e5_be2a_86c057393fd1.slice - libcontainer container kubepods-besteffort-podc5c3744c_27ec_45e5_be2a_86c057393fd1.slice. Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.197 [ERROR][4781] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.291 [INFO][4781] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0 goldmane-5b85766d88- calico-system 8d59be44-1022-4a6d-9634-d8c1fdfc4c4b 946 0 2026-04-24 23:37:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-112 goldmane-5b85766d88-77cfg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidb205269d41 [] [] }} ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Namespace="calico-system" Pod="goldmane-5b85766d88-77cfg" WorkloadEndpoint="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.291 [INFO][4781] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Namespace="calico-system" Pod="goldmane-5b85766d88-77cfg" WorkloadEndpoint="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.747 [INFO][4845] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" HandleID="k8s-pod-network.945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.804 [INFO][4845] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" HandleID="k8s-pod-network.945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000398c60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-112", "pod":"goldmane-5b85766d88-77cfg", "timestamp":"2026-04-24 23:38:04.747029681 +0000 UTC"}, Hostname:"ip-172-31-17-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004c6420)} Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.804 [INFO][4845] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.935 [INFO][4845] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.935 [INFO][4845] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-112' Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.946 [INFO][4845] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" host="ip-172-31-17-112" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:04.976 [INFO][4845] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-112" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.005 [INFO][4845] ipam/ipam.go 526: Trying affinity for 192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.013 [INFO][4845] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.031 [INFO][4845] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.037 [INFO][4845] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" host="ip-172-31-17-112" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.048 [INFO][4845] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.123 [INFO][4845] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" host="ip-172-31-17-112" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.148 [INFO][4845] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.130/26] block=192.168.36.128/26 
handle="k8s-pod-network.945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" host="ip-172-31-17-112" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.148 [INFO][4845] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.130/26] handle="k8s-pod-network.945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" host="ip-172-31-17-112" Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.148 [INFO][4845] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:05.430986 containerd[2018]: 2026-04-24 23:38:05.149 [INFO][4845] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.130/26] IPv6=[] ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" HandleID="k8s-pod-network.945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:05.434230 containerd[2018]: 2026-04-24 23:38:05.194 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Namespace="calico-system" Pod="goldmane-5b85766d88-77cfg" WorkloadEndpoint="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"8d59be44-1022-4a6d-9634-d8c1fdfc4c4b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"", Pod:"goldmane-5b85766d88-77cfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidb205269d41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:05.434230 containerd[2018]: 2026-04-24 23:38:05.194 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.130/32] ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Namespace="calico-system" Pod="goldmane-5b85766d88-77cfg" WorkloadEndpoint="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:05.434230 containerd[2018]: 2026-04-24 23:38:05.194 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb205269d41 ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Namespace="calico-system" Pod="goldmane-5b85766d88-77cfg" WorkloadEndpoint="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:05.434230 containerd[2018]: 2026-04-24 23:38:05.298 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Namespace="calico-system" Pod="goldmane-5b85766d88-77cfg" WorkloadEndpoint="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:05.434230 containerd[2018]: 2026-04-24 23:38:05.298 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Namespace="calico-system" Pod="goldmane-5b85766d88-77cfg" WorkloadEndpoint="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"8d59be44-1022-4a6d-9634-d8c1fdfc4c4b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d", Pod:"goldmane-5b85766d88-77cfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidb205269d41", MAC:"42:1c:00:40:e7:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:05.434230 containerd[2018]: 2026-04-24 23:38:05.402 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d" Namespace="calico-system" Pod="goldmane-5b85766d88-77cfg" 
WorkloadEndpoint="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:05.487959 kubelet[3426]: I0424 23:38:05.486647 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c3744c-27ec-45e5-be2a-86c057393fd1-whisker-ca-bundle\") pod \"whisker-bc76d45f7-hgqwn\" (UID: \"c5c3744c-27ec-45e5-be2a-86c057393fd1\") " pod="calico-system/whisker-bc76d45f7-hgqwn" Apr 24 23:38:05.488649 kubelet[3426]: I0424 23:38:05.488086 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25lj\" (UniqueName: \"kubernetes.io/projected/c5c3744c-27ec-45e5-be2a-86c057393fd1-kube-api-access-d25lj\") pod \"whisker-bc76d45f7-hgqwn\" (UID: \"c5c3744c-27ec-45e5-be2a-86c057393fd1\") " pod="calico-system/whisker-bc76d45f7-hgqwn" Apr 24 23:38:05.488649 kubelet[3426]: I0424 23:38:05.488407 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c5c3744c-27ec-45e5-be2a-86c057393fd1-nginx-config\") pod \"whisker-bc76d45f7-hgqwn\" (UID: \"c5c3744c-27ec-45e5-be2a-86c057393fd1\") " pod="calico-system/whisker-bc76d45f7-hgqwn" Apr 24 23:38:05.488777 kubelet[3426]: I0424 23:38:05.488659 3426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5c3744c-27ec-45e5-be2a-86c057393fd1-whisker-backend-key-pair\") pod \"whisker-bc76d45f7-hgqwn\" (UID: \"c5c3744c-27ec-45e5-be2a-86c057393fd1\") " pod="calico-system/whisker-bc76d45f7-hgqwn" Apr 24 23:38:05.557697 systemd-networkd[1934]: cali3b1306e0af3: Link UP Apr 24 23:38:05.560587 systemd-networkd[1934]: cali3b1306e0af3: Gained carrier Apr 24 23:38:05.595852 containerd[2018]: time="2026-04-24T23:38:05.590408010Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:05.595852 containerd[2018]: time="2026-04-24T23:38:05.590531154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:05.595852 containerd[2018]: time="2026-04-24T23:38:05.590562282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:05.595852 containerd[2018]: time="2026-04-24T23:38:05.590735214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:05.634739 containerd[2018]: time="2026-04-24T23:38:05.622039962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:05.637391 containerd[2018]: time="2026-04-24T23:38:05.636925614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:05.640502 containerd[2018]: time="2026-04-24T23:38:05.637868718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:05.644846 containerd[2018]: time="2026-04-24T23:38:05.643242606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:04.536 [ERROR][4830] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:04.673 [INFO][4830] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0 calico-apiserver-df9f5595c- calico-system b3876f02-9c03-4cf9-883e-82b297082dbe 947 0 2026-04-24 23:37:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:df9f5595c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-112 calico-apiserver-df9f5595c-hn778 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3b1306e0af3 [] [] }} ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-hn778" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:04.673 [INFO][4830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-hn778" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:04.949 [INFO][4949] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" 
HandleID="k8s-pod-network.e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.060 [INFO][4949] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" HandleID="k8s-pod-network.e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000626a00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-112", "pod":"calico-apiserver-df9f5595c-hn778", "timestamp":"2026-04-24 23:38:04.949190166 +0000 UTC"}, Hostname:"ip-172-31-17-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000187080)} Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.060 [INFO][4949] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.156 [INFO][4949] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.157 [INFO][4949] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-112' Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.167 [INFO][4949] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" host="ip-172-31-17-112" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.213 [INFO][4949] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-112" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.272 [INFO][4949] ipam/ipam.go 526: Trying affinity for 192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.302 [INFO][4949] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.355 [INFO][4949] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.356 [INFO][4949] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" host="ip-172-31-17-112" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.389 [INFO][4949] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159 Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.443 [INFO][4949] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" host="ip-172-31-17-112" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.504 [INFO][4949] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.131/26] block=192.168.36.128/26 
handle="k8s-pod-network.e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" host="ip-172-31-17-112" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.504 [INFO][4949] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.131/26] handle="k8s-pod-network.e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" host="ip-172-31-17-112" Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.504 [INFO][4949] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:05.683550 containerd[2018]: 2026-04-24 23:38:05.504 [INFO][4949] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.131/26] IPv6=[] ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" HandleID="k8s-pod-network.e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:05.686965 containerd[2018]: 2026-04-24 23:38:05.525 [INFO][4830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-hn778" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0", GenerateName:"calico-apiserver-df9f5595c-", Namespace:"calico-system", SelfLink:"", UID:"b3876f02-9c03-4cf9-883e-82b297082dbe", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9f5595c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"", Pod:"calico-apiserver-df9f5595c-hn778", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3b1306e0af3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:05.686965 containerd[2018]: 2026-04-24 23:38:05.525 [INFO][4830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.131/32] ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-hn778" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:05.686965 containerd[2018]: 2026-04-24 23:38:05.525 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b1306e0af3 ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-hn778" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:05.686965 containerd[2018]: 2026-04-24 23:38:05.578 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-hn778" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:05.686965 containerd[2018]: 2026-04-24 23:38:05.580 [INFO][4830] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-hn778" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0", GenerateName:"calico-apiserver-df9f5595c-", Namespace:"calico-system", SelfLink:"", UID:"b3876f02-9c03-4cf9-883e-82b297082dbe", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9f5595c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159", Pod:"calico-apiserver-df9f5595c-hn778", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3b1306e0af3", MAC:"a2:3c:fd:f3:e2:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:05.686965 containerd[2018]: 2026-04-24 23:38:05.640 [INFO][4830] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-hn778" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:05.831667 systemd-networkd[1934]: cali91e7ca06fa7: Link UP Apr 24 23:38:05.841240 systemd-networkd[1934]: cali91e7ca06fa7: Gained carrier Apr 24 23:38:05.850019 containerd[2018]: time="2026-04-24T23:38:05.849018127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:05.852117 containerd[2018]: time="2026-04-24T23:38:05.851967019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:05.854008 containerd[2018]: time="2026-04-24T23:38:05.852655783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:05.860356 containerd[2018]: time="2026-04-24T23:38:05.859523839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:04.499 [ERROR][4806] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:04.576 [INFO][4806] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0 calico-apiserver-df9f5595c- calico-system 65e053c6-4f3f-427f-8a78-6c78634b5620 948 0 2026-04-24 23:37:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:df9f5595c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-112 calico-apiserver-df9f5595c-2967j eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali91e7ca06fa7 [] [] }} ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-2967j" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:04.577 [INFO][4806] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-2967j" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.304 [INFO][4929] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" 
HandleID="k8s-pod-network.25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.363 [INFO][4929] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" HandleID="k8s-pod-network.25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bf600), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-112", "pod":"calico-apiserver-df9f5595c-2967j", "timestamp":"2026-04-24 23:38:05.304921504 +0000 UTC"}, Hostname:"ip-172-31-17-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186000)} Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.363 [INFO][4929] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.507 [INFO][4929] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.511 [INFO][4929] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-112' Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.526 [INFO][4929] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" host="ip-172-31-17-112" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.598 [INFO][4929] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-112" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.655 [INFO][4929] ipam/ipam.go 526: Trying affinity for 192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.681 [INFO][4929] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.697 [INFO][4929] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.698 [INFO][4929] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" host="ip-172-31-17-112" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.710 [INFO][4929] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816 Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.724 [INFO][4929] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" host="ip-172-31-17-112" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.747 [INFO][4929] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.132/26] block=192.168.36.128/26 
handle="k8s-pod-network.25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" host="ip-172-31-17-112" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.747 [INFO][4929] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.132/26] handle="k8s-pod-network.25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" host="ip-172-31-17-112" Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.747 [INFO][4929] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:05.947335 containerd[2018]: 2026-04-24 23:38:05.747 [INFO][4929] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.132/26] IPv6=[] ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" HandleID="k8s-pod-network.25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:05.952144 containerd[2018]: 2026-04-24 23:38:05.793 [INFO][4806] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-2967j" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0", GenerateName:"calico-apiserver-df9f5595c-", Namespace:"calico-system", SelfLink:"", UID:"65e053c6-4f3f-427f-8a78-6c78634b5620", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9f5595c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"", Pod:"calico-apiserver-df9f5595c-2967j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali91e7ca06fa7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:05.952144 containerd[2018]: 2026-04-24 23:38:05.793 [INFO][4806] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.132/32] ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-2967j" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:05.952144 containerd[2018]: 2026-04-24 23:38:05.793 [INFO][4806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91e7ca06fa7 ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-2967j" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:05.952144 containerd[2018]: 2026-04-24 23:38:05.855 [INFO][4806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-2967j" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:05.952144 containerd[2018]: 2026-04-24 23:38:05.875 [INFO][4806] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-2967j" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0", GenerateName:"calico-apiserver-df9f5595c-", Namespace:"calico-system", SelfLink:"", UID:"65e053c6-4f3f-427f-8a78-6c78634b5620", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9f5595c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816", Pod:"calico-apiserver-df9f5595c-2967j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali91e7ca06fa7", MAC:"da:d2:10:d6:e8:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:05.952144 containerd[2018]: 2026-04-24 23:38:05.926 [INFO][4806] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816" Namespace="calico-system" Pod="calico-apiserver-df9f5595c-2967j" WorkloadEndpoint="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:05.970092 systemd[1]: Started cri-containerd-945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d.scope - libcontainer container 945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d. Apr 24 23:38:06.000858 systemd[1]: Started cri-containerd-ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28.scope - libcontainer container ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28. Apr 24 23:38:06.006457 systemd[1]: Started cri-containerd-e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159.scope - libcontainer container e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159. Apr 24 23:38:06.038870 containerd[2018]: time="2026-04-24T23:38:06.038094232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bc76d45f7-hgqwn,Uid:c5c3744c-27ec-45e5-be2a-86c057393fd1,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:06.057924 systemd-networkd[1934]: cali12e44058a6d: Link UP Apr 24 23:38:06.061849 systemd-networkd[1934]: cali12e44058a6d: Gained carrier Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:04.799 [ERROR][4849] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:04.898 [INFO][4849] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0 coredns-674b8bbfcf- kube-system 4359df61-6ba9-4f39-beb9-94953c5d320d 942 0 2026-04-24 23:37:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-112 coredns-674b8bbfcf-nvwfk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali12e44058a6d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvwfk" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:04.898 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvwfk" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.625 [INFO][4969] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" HandleID="k8s-pod-network.80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.686 [INFO][4969] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" HandleID="k8s-pod-network.80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001216f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-112", "pod":"coredns-674b8bbfcf-nvwfk", "timestamp":"2026-04-24 23:38:05.625922226 +0000 UTC"}, Hostname:"ip-172-31-17-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002f0000)} Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.686 [INFO][4969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.750 [INFO][4969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.750 [INFO][4969] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-112' Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.758 [INFO][4969] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" host="ip-172-31-17-112" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.808 [INFO][4969] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-112" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.852 [INFO][4969] ipam/ipam.go 526: Trying affinity for 192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.869 [INFO][4969] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.883 [INFO][4969] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.883 [INFO][4969] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" host="ip-172-31-17-112" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.910 [INFO][4969] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010 Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.922 [INFO][4969] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" host="ip-172-31-17-112" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.957 [INFO][4969] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.133/26] block=192.168.36.128/26 handle="k8s-pod-network.80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" host="ip-172-31-17-112" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.962 [INFO][4969] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.133/26] handle="k8s-pod-network.80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" host="ip-172-31-17-112" Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.966 [INFO][4969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:38:06.168227 containerd[2018]: 2026-04-24 23:38:05.966 [INFO][4969] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.133/26] IPv6=[] ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" HandleID="k8s-pod-network.80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:06.171581 containerd[2018]: 2026-04-24 23:38:06.020 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvwfk" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4359df61-6ba9-4f39-beb9-94953c5d320d", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"", Pod:"coredns-674b8bbfcf-nvwfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12e44058a6d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:06.171581 containerd[2018]: 2026-04-24 23:38:06.021 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.133/32] ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvwfk" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:06.171581 containerd[2018]: 2026-04-24 23:38:06.022 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12e44058a6d ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvwfk" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:06.171581 containerd[2018]: 2026-04-24 23:38:06.069 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvwfk" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:06.171581 containerd[2018]: 2026-04-24 23:38:06.076 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvwfk" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4359df61-6ba9-4f39-beb9-94953c5d320d", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010", Pod:"coredns-674b8bbfcf-nvwfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12e44058a6d", MAC:"56:66:68:a2:c4:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:06.171581 containerd[2018]: 2026-04-24 23:38:06.128 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvwfk" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:06.255643 containerd[2018]: time="2026-04-24T23:38:06.251057837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:06.255643 containerd[2018]: time="2026-04-24T23:38:06.251165177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:06.255643 containerd[2018]: time="2026-04-24T23:38:06.251201585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:06.255643 containerd[2018]: time="2026-04-24T23:38:06.251416229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:06.271249 systemd-networkd[1934]: cali19901b59e85: Link UP Apr 24 23:38:06.320510 systemd-networkd[1934]: cali19901b59e85: Gained carrier Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:04.921 [ERROR][4901] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:05.110 [INFO][4901] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0 csi-node-driver- calico-system 6eaa5414-466a-4dc1-ab02-5e5914bb112e 944 0 2026-04-24 23:37:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-112 csi-node-driver-d77dn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali19901b59e85 [] [] }} ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Namespace="calico-system" Pod="csi-node-driver-d77dn" WorkloadEndpoint="ip--172--31--17--112-k8s-csi--node--driver--d77dn-" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:05.110 [INFO][4901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Namespace="calico-system" Pod="csi-node-driver-d77dn" WorkloadEndpoint="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:05.657 [INFO][4998] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" HandleID="k8s-pod-network.23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:05.721 [INFO][4998] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" HandleID="k8s-pod-network.23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000628b50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-112", "pod":"csi-node-driver-d77dn", "timestamp":"2026-04-24 23:38:05.657852966 +0000 UTC"}, Hostname:"ip-172-31-17-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004118c0)} Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:05.721 [INFO][4998] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:05.967 [INFO][4998] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:05.967 [INFO][4998] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-112' Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:05.988 [INFO][4998] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" host="ip-172-31-17-112" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.040 [INFO][4998] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-112" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.095 [INFO][4998] ipam/ipam.go 526: Trying affinity for 192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.102 [INFO][4998] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.109 [INFO][4998] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.109 [INFO][4998] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" host="ip-172-31-17-112" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.116 [INFO][4998] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22 Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.155 [INFO][4998] ipam/ipam.go 1272: 
Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" host="ip-172-31-17-112" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.182 [INFO][4998] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.134/26] block=192.168.36.128/26 handle="k8s-pod-network.23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" host="ip-172-31-17-112" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.182 [INFO][4998] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.134/26] handle="k8s-pod-network.23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" host="ip-172-31-17-112" Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.182 [INFO][4998] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:06.412166 containerd[2018]: 2026-04-24 23:38:06.185 [INFO][4998] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.134/26] IPv6=[] ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" HandleID="k8s-pod-network.23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:06.418582 containerd[2018]: 2026-04-24 23:38:06.226 [INFO][4901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Namespace="calico-system" Pod="csi-node-driver-d77dn" WorkloadEndpoint="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6eaa5414-466a-4dc1-ab02-5e5914bb112e", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 
43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"", Pod:"csi-node-driver-d77dn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19901b59e85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:06.418582 containerd[2018]: 2026-04-24 23:38:06.231 [INFO][4901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.134/32] ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Namespace="calico-system" Pod="csi-node-driver-d77dn" WorkloadEndpoint="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:06.418582 containerd[2018]: 2026-04-24 23:38:06.231 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19901b59e85 ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Namespace="calico-system" Pod="csi-node-driver-d77dn" WorkloadEndpoint="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:06.418582 containerd[2018]: 2026-04-24 23:38:06.324 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Namespace="calico-system" Pod="csi-node-driver-d77dn" WorkloadEndpoint="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:06.418582 containerd[2018]: 2026-04-24 23:38:06.341 [INFO][4901] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Namespace="calico-system" Pod="csi-node-driver-d77dn" WorkloadEndpoint="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6eaa5414-466a-4dc1-ab02-5e5914bb112e", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22", Pod:"csi-node-driver-d77dn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, 
InterfaceName:"cali19901b59e85", MAC:"4a:51:f9:65:15:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:06.418582 containerd[2018]: 2026-04-24 23:38:06.390 [INFO][4901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22" Namespace="calico-system" Pod="csi-node-driver-d77dn" WorkloadEndpoint="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:06.428815 systemd[1]: run-containerd-runc-k8s.io-25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816-runc.6Gbesq.mount: Deactivated successfully. Apr 24 23:38:06.476298 containerd[2018]: time="2026-04-24T23:38:06.475220730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:06.476298 containerd[2018]: time="2026-04-24T23:38:06.475360770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:06.476298 containerd[2018]: time="2026-04-24T23:38:06.475399530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:06.476298 containerd[2018]: time="2026-04-24T23:38:06.475599294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:06.477170 systemd-networkd[1934]: cali151bf1dcb99: Link UP Apr 24 23:38:06.481614 systemd-networkd[1934]: cali151bf1dcb99: Gained carrier Apr 24 23:38:06.482377 systemd-networkd[1934]: calidb205269d41: Gained IPv6LL Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:04.858 [ERROR][4875] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:04.957 [INFO][4875] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0 coredns-674b8bbfcf- kube-system af04062c-5fa7-4ac4-a06d-12268ffd9166 945 0 2026-04-24 23:37:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-112 coredns-674b8bbfcf-sv25n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali151bf1dcb99 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv25n" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:04.960 [INFO][4875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv25n" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:05.691 [INFO][4982] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" HandleID="k8s-pod-network.7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:05.733 [INFO][4982] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" HandleID="k8s-pod-network.7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cbd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-112", "pod":"coredns-674b8bbfcf-sv25n", "timestamp":"2026-04-24 23:38:05.691611606 +0000 UTC"}, Hostname:"ip-172-31-17-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400038c6e0)} Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:05.734 [INFO][4982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.184 [INFO][4982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.187 [INFO][4982] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-112' Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.206 [INFO][4982] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" host="ip-172-31-17-112" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.259 [INFO][4982] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-112" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.329 [INFO][4982] ipam/ipam.go 526: Trying affinity for 192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.338 [INFO][4982] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.366 [INFO][4982] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.366 [INFO][4982] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" host="ip-172-31-17-112" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.372 [INFO][4982] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318 Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.397 [INFO][4982] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" host="ip-172-31-17-112" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.437 [INFO][4982] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.135/26] block=192.168.36.128/26 
handle="k8s-pod-network.7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" host="ip-172-31-17-112" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.439 [INFO][4982] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.135/26] handle="k8s-pod-network.7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" host="ip-172-31-17-112" Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.439 [INFO][4982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:06.555218 containerd[2018]: 2026-04-24 23:38:06.439 [INFO][4982] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.135/26] IPv6=[] ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" HandleID="k8s-pod-network.7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:06.556550 containerd[2018]: 2026-04-24 23:38:06.458 [INFO][4875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv25n" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"af04062c-5fa7-4ac4-a06d-12268ffd9166", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"", Pod:"coredns-674b8bbfcf-sv25n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali151bf1dcb99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:06.556550 containerd[2018]: 2026-04-24 23:38:06.459 [INFO][4875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.135/32] ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv25n" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:06.556550 containerd[2018]: 2026-04-24 23:38:06.459 [INFO][4875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali151bf1dcb99 ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv25n" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:06.556550 containerd[2018]: 2026-04-24 23:38:06.498 [INFO][4875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-sv25n" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:06.556550 containerd[2018]: 2026-04-24 23:38:06.503 [INFO][4875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv25n" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"af04062c-5fa7-4ac4-a06d-12268ffd9166", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318", Pod:"coredns-674b8bbfcf-sv25n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali151bf1dcb99", MAC:"7e:ed:eb:fa:af:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:06.556550 containerd[2018]: 2026-04-24 23:38:06.540 [INFO][4875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv25n" WorkloadEndpoint="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:06.558870 containerd[2018]: time="2026-04-24T23:38:06.556309218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:06.558870 containerd[2018]: time="2026-04-24T23:38:06.556777422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:06.559829 containerd[2018]: time="2026-04-24T23:38:06.556957698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:06.562137 containerd[2018]: time="2026-04-24T23:38:06.561102846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:06.603751 kubelet[3426]: I0424 23:38:06.603678 3426 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd92f4c-e0fc-4914-8842-cd565a232751" path="/var/lib/kubelet/pods/9fd92f4c-e0fc-4914-8842-cd565a232751/volumes" Apr 24 23:38:06.625846 systemd[1]: Started cri-containerd-25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816.scope - libcontainer container 25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816. Apr 24 23:38:06.656594 systemd[1]: Started cri-containerd-80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010.scope - libcontainer container 80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010. Apr 24 23:38:06.685427 systemd[1]: Started cri-containerd-23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22.scope - libcontainer container 23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22. Apr 24 23:38:06.818113 containerd[2018]: time="2026-04-24T23:38:06.817870244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-77cfg,Uid:8d59be44-1022-4a6d-9634-d8c1fdfc4c4b,Namespace:calico-system,Attempt:1,} returns sandbox id \"945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d\"" Apr 24 23:38:06.825960 containerd[2018]: time="2026-04-24T23:38:06.825196172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:38:06.830280 containerd[2018]: time="2026-04-24T23:38:06.807568052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:06.830280 containerd[2018]: time="2026-04-24T23:38:06.829467080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:06.830280 containerd[2018]: time="2026-04-24T23:38:06.829506560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:06.830280 containerd[2018]: time="2026-04-24T23:38:06.829741904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:06.866384 systemd-networkd[1934]: cali3b5848d13c2: Gained IPv6LL Apr 24 23:38:06.938407 systemd[1]: Started cri-containerd-7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318.scope - libcontainer container 7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318. Apr 24 23:38:07.051831 containerd[2018]: time="2026-04-24T23:38:07.051756881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9f5595c-hn778,Uid:b3876f02-9c03-4cf9-883e-82b297082dbe,Namespace:calico-system,Attempt:1,} returns sandbox id \"e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159\"" Apr 24 23:38:07.099323 containerd[2018]: time="2026-04-24T23:38:07.098990837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nvwfk,Uid:4359df61-6ba9-4f39-beb9-94953c5d320d,Namespace:kube-system,Attempt:1,} returns sandbox id \"80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010\"" Apr 24 23:38:07.142273 containerd[2018]: time="2026-04-24T23:38:07.140400701Z" level=info msg="CreateContainer within sandbox \"80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:38:07.143852 containerd[2018]: time="2026-04-24T23:38:07.140891273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8696996ff8-qz98n,Uid:484f4df4-9a9a-41cc-87e4-b75a5e07666e,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28\"" Apr 24 23:38:07.169058 containerd[2018]: time="2026-04-24T23:38:07.168019853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d77dn,Uid:6eaa5414-466a-4dc1-ab02-5e5914bb112e,Namespace:calico-system,Attempt:1,} returns sandbox id \"23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22\"" Apr 24 23:38:07.184374 systemd-networkd[1934]: cali91e7ca06fa7: Gained IPv6LL Apr 24 23:38:07.236759 containerd[2018]: time="2026-04-24T23:38:07.235433070Z" level=info msg="CreateContainer within sandbox \"80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"904b828443b72a63f2106cee4b9ed3c9b349d77e6009ed16224433c148ae63f0\"" Apr 24 23:38:07.242112 containerd[2018]: time="2026-04-24T23:38:07.241046238Z" level=info msg="StartContainer for \"904b828443b72a63f2106cee4b9ed3c9b349d77e6009ed16224433c148ae63f0\"" Apr 24 23:38:07.249219 systemd-networkd[1934]: cali3b1306e0af3: Gained IPv6LL Apr 24 23:38:07.308510 systemd-networkd[1934]: cali4da18157ecb: Link UP Apr 24 23:38:07.315295 systemd-networkd[1934]: cali4da18157ecb: Gained carrier Apr 24 23:38:07.331128 containerd[2018]: time="2026-04-24T23:38:07.331074738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv25n,Uid:af04062c-5fa7-4ac4-a06d-12268ffd9166,Namespace:kube-system,Attempt:1,} returns sandbox id \"7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318\"" Apr 24 23:38:07.354851 containerd[2018]: time="2026-04-24T23:38:07.354687102Z" level=info msg="CreateContainer within sandbox \"7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:38:07.374953 containerd[2018]: time="2026-04-24T23:38:07.374569290Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-df9f5595c-2967j,Uid:65e053c6-4f3f-427f-8a78-6c78634b5620,Namespace:calico-system,Attempt:1,} returns sandbox id \"25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816\"" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:06.569 [ERROR][5184] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:06.618 [INFO][5184] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0 whisker-bc76d45f7- calico-system c5c3744c-27ec-45e5-be2a-86c057393fd1 973 0 2026-04-24 23:38:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bc76d45f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-112 whisker-bc76d45f7-hgqwn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4da18157ecb [] [] }} ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Namespace="calico-system" Pod="whisker-bc76d45f7-hgqwn" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:06.621 [INFO][5184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Namespace="calico-system" Pod="whisker-bc76d45f7-hgqwn" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:06.940 [INFO][5298] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" 
HandleID="k8s-pod-network.45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Workload="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:06.996 [INFO][5298] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" HandleID="k8s-pod-network.45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Workload="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000186350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-112", "pod":"whisker-bc76d45f7-hgqwn", "timestamp":"2026-04-24 23:38:06.940889432 +0000 UTC"}, Hostname:"ip-172-31-17-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004d0840)} Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:06.996 [INFO][5298] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:06.998 [INFO][5298] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.000 [INFO][5298] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-112' Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.016 [INFO][5298] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" host="ip-172-31-17-112" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.069 [INFO][5298] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-112" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.102 [INFO][5298] ipam/ipam.go 526: Trying affinity for 192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.108 [INFO][5298] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.147 [INFO][5298] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="ip-172-31-17-112" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.148 [INFO][5298] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" host="ip-172-31-17-112" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.158 [INFO][5298] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787 Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.183 [INFO][5298] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" host="ip-172-31-17-112" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.231 [INFO][5298] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.136/26] block=192.168.36.128/26 
handle="k8s-pod-network.45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" host="ip-172-31-17-112" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.235 [INFO][5298] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.136/26] handle="k8s-pod-network.45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" host="ip-172-31-17-112" Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.240 [INFO][5298] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:07.410150 containerd[2018]: 2026-04-24 23:38:07.240 [INFO][5298] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.136/26] IPv6=[] ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" HandleID="k8s-pod-network.45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Workload="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" Apr 24 23:38:07.415161 containerd[2018]: 2026-04-24 23:38:07.269 [INFO][5184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Namespace="calico-system" Pod="whisker-bc76d45f7-hgqwn" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0", GenerateName:"whisker-bc76d45f7-", Namespace:"calico-system", SelfLink:"", UID:"c5c3744c-27ec-45e5-be2a-86c057393fd1", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc76d45f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"", Pod:"whisker-bc76d45f7-hgqwn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4da18157ecb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:07.415161 containerd[2018]: 2026-04-24 23:38:07.272 [INFO][5184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.136/32] ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Namespace="calico-system" Pod="whisker-bc76d45f7-hgqwn" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" Apr 24 23:38:07.415161 containerd[2018]: 2026-04-24 23:38:07.275 [INFO][5184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4da18157ecb ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Namespace="calico-system" Pod="whisker-bc76d45f7-hgqwn" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" Apr 24 23:38:07.415161 containerd[2018]: 2026-04-24 23:38:07.327 [INFO][5184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Namespace="calico-system" Pod="whisker-bc76d45f7-hgqwn" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" Apr 24 23:38:07.415161 containerd[2018]: 2026-04-24 23:38:07.329 [INFO][5184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Namespace="calico-system" 
Pod="whisker-bc76d45f7-hgqwn" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0", GenerateName:"whisker-bc76d45f7-", Namespace:"calico-system", SelfLink:"", UID:"c5c3744c-27ec-45e5-be2a-86c057393fd1", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc76d45f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787", Pod:"whisker-bc76d45f7-hgqwn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4da18157ecb", MAC:"92:54:d1:45:ea:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:07.415161 containerd[2018]: 2026-04-24 23:38:07.371 [INFO][5184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787" Namespace="calico-system" Pod="whisker-bc76d45f7-hgqwn" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--bc76d45f7--hgqwn-eth0" Apr 24 23:38:07.445781 systemd[1]: Started 
cri-containerd-904b828443b72a63f2106cee4b9ed3c9b349d77e6009ed16224433c148ae63f0.scope - libcontainer container 904b828443b72a63f2106cee4b9ed3c9b349d77e6009ed16224433c148ae63f0. Apr 24 23:38:07.499266 containerd[2018]: time="2026-04-24T23:38:07.498290827Z" level=info msg="CreateContainer within sandbox \"7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d507db33213007d572b245a307df63c407b0427b21b3814a88855897c6345cc\"" Apr 24 23:38:07.501666 containerd[2018]: time="2026-04-24T23:38:07.501339379Z" level=info msg="StartContainer for \"2d507db33213007d572b245a307df63c407b0427b21b3814a88855897c6345cc\"" Apr 24 23:38:07.545473 containerd[2018]: time="2026-04-24T23:38:07.544802551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:07.545473 containerd[2018]: time="2026-04-24T23:38:07.544931311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:07.545473 containerd[2018]: time="2026-04-24T23:38:07.544961479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:07.545473 containerd[2018]: time="2026-04-24T23:38:07.545179039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:07.569191 systemd-networkd[1934]: cali12e44058a6d: Gained IPv6LL Apr 24 23:38:07.638128 systemd-networkd[1934]: cali19901b59e85: Gained IPv6LL Apr 24 23:38:07.679777 containerd[2018]: time="2026-04-24T23:38:07.679710428Z" level=info msg="StartContainer for \"904b828443b72a63f2106cee4b9ed3c9b349d77e6009ed16224433c148ae63f0\" returns successfully" Apr 24 23:38:07.735757 systemd[1]: Started cri-containerd-2d507db33213007d572b245a307df63c407b0427b21b3814a88855897c6345cc.scope - libcontainer container 2d507db33213007d572b245a307df63c407b0427b21b3814a88855897c6345cc. Apr 24 23:38:07.743207 systemd[1]: Started cri-containerd-45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787.scope - libcontainer container 45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787. Apr 24 23:38:07.843956 containerd[2018]: time="2026-04-24T23:38:07.843785013Z" level=info msg="StartContainer for \"2d507db33213007d572b245a307df63c407b0427b21b3814a88855897c6345cc\" returns successfully" Apr 24 23:38:08.150369 containerd[2018]: time="2026-04-24T23:38:08.149905338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bc76d45f7-hgqwn,Uid:c5c3744c-27ec-45e5-be2a-86c057393fd1,Namespace:calico-system,Attempt:0,} returns sandbox id \"45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787\"" Apr 24 23:38:08.391883 kubelet[3426]: I0424 23:38:08.391506 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sv25n" podStartSLOduration=49.391471423 podStartE2EDuration="49.391471423s" podCreationTimestamp="2026-04-24 23:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:08.358234135 +0000 UTC m=+54.082817273" watchObservedRunningTime="2026-04-24 23:38:08.391471423 +0000 UTC m=+54.116054549" Apr 24 23:38:08.400399 
systemd-networkd[1934]: cali151bf1dcb99: Gained IPv6LL Apr 24 23:38:08.449016 kubelet[3426]: I0424 23:38:08.447595 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nvwfk" podStartSLOduration=49.447575096 podStartE2EDuration="49.447575096s" podCreationTimestamp="2026-04-24 23:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:08.44753282 +0000 UTC m=+54.172115934" watchObservedRunningTime="2026-04-24 23:38:08.447575096 +0000 UTC m=+54.172158210" Apr 24 23:38:08.558087 kernel: calico-node[4895]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:38:09.360280 systemd-networkd[1934]: cali4da18157ecb: Gained IPv6LL Apr 24 23:38:09.477211 systemd-networkd[1934]: vxlan.calico: Link UP Apr 24 23:38:09.477228 systemd-networkd[1934]: vxlan.calico: Gained carrier Apr 24 23:38:10.826542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470044738.mount: Deactivated successfully. 
Apr 24 23:38:11.216227 systemd-networkd[1934]: vxlan.calico: Gained IPv6LL Apr 24 23:38:11.483121 containerd[2018]: time="2026-04-24T23:38:11.482748851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:11.485296 containerd[2018]: time="2026-04-24T23:38:11.484994807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 24 23:38:11.488948 containerd[2018]: time="2026-04-24T23:38:11.487513307Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:11.494005 containerd[2018]: time="2026-04-24T23:38:11.492649391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:11.494665 containerd[2018]: time="2026-04-24T23:38:11.494614355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 4.669339043s" Apr 24 23:38:11.494836 containerd[2018]: time="2026-04-24T23:38:11.494805827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 24 23:38:11.497719 containerd[2018]: time="2026-04-24T23:38:11.497635055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:38:11.503479 containerd[2018]: time="2026-04-24T23:38:11.503400335Z" level=info msg="CreateContainer 
within sandbox \"945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:38:11.539272 containerd[2018]: time="2026-04-24T23:38:11.539087447Z" level=info msg="CreateContainer within sandbox \"945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a3348fe83d2ec85c0aa7b93c5550407da0dcbe2add579b51c214866c624427df\"" Apr 24 23:38:11.540429 containerd[2018]: time="2026-04-24T23:38:11.540178787Z" level=info msg="StartContainer for \"a3348fe83d2ec85c0aa7b93c5550407da0dcbe2add579b51c214866c624427df\"" Apr 24 23:38:11.648318 systemd[1]: Started cri-containerd-a3348fe83d2ec85c0aa7b93c5550407da0dcbe2add579b51c214866c624427df.scope - libcontainer container a3348fe83d2ec85c0aa7b93c5550407da0dcbe2add579b51c214866c624427df. Apr 24 23:38:11.751227 containerd[2018]: time="2026-04-24T23:38:11.751009320Z" level=info msg="StartContainer for \"a3348fe83d2ec85c0aa7b93c5550407da0dcbe2add579b51c214866c624427df\" returns successfully" Apr 24 23:38:12.529607 systemd[1]: run-containerd-runc-k8s.io-a3348fe83d2ec85c0aa7b93c5550407da0dcbe2add579b51c214866c624427df-runc.42sWry.mount: Deactivated successfully. 
Apr 24 23:38:13.958813 ntpd[1987]: Listen normally on 8 vxlan.calico 192.168.36.128:123 Apr 24 23:38:13.958957 ntpd[1987]: Listen normally on 9 cali3b5848d13c2 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 24 23:38:13.959085 ntpd[1987]: Listen normally on 10 calidb205269d41 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 24 23:38:13.959157 ntpd[1987]: Listen normally on 11 cali3b1306e0af3 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 24 23:38:13.959227 ntpd[1987]: Listen normally on 12 cali91e7ca06fa7 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 24 23:38:13.959303 ntpd[1987]: Listen normally on 13 cali12e44058a6d [fe80::ecee:eeff:feee:eeee%8]:123 Apr 24 23:38:13.959379 ntpd[1987]: Listen 
normally on 14 cali19901b59e85 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 24 23:38:13.959447 ntpd[1987]: Listen normally on 15 cali151bf1dcb99 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 24 23:38:13.959515 ntpd[1987]: Listen normally on 16 cali4da18157ecb [fe80::ecee:eeff:feee:eeee%11]:123 Apr 24 23:38:13.959584 ntpd[1987]: Listen normally on 17 vxlan.calico [fe80::6473:30ff:fe7c:7c62%12]:123 Apr 24 23:38:14.331312 containerd[2018]: time="2026-04-24T23:38:14.331250005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:14.333727 containerd[2018]: time="2026-04-24T23:38:14.333250165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 24 23:38:14.333727 containerd[2018]: time="2026-04-24T23:38:14.333659605Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:14.338681 containerd[2018]: time="2026-04-24T23:38:14.338624761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:14.341010 containerd[2018]: time="2026-04-24T23:38:14.340423837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.842704626s" Apr 24 23:38:14.341010 containerd[2018]: time="2026-04-24T23:38:14.340477849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 24 23:38:14.343603 containerd[2018]: time="2026-04-24T23:38:14.343131517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:38:14.350464 containerd[2018]: time="2026-04-24T23:38:14.350375821Z" level=info msg="CreateContainer within sandbox \"e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:38:14.378924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396597479.mount: Deactivated successfully. Apr 24 23:38:14.384604 containerd[2018]: time="2026-04-24T23:38:14.384286165Z" level=info msg="CreateContainer within sandbox \"e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"58877c8c84d2386bc2ad5b4416b267f9c3d69ad4d38895a991e2e5f8ddbc3dad\"" Apr 24 23:38:14.385885 containerd[2018]: time="2026-04-24T23:38:14.385340293Z" level=info msg="StartContainer for \"58877c8c84d2386bc2ad5b4416b267f9c3d69ad4d38895a991e2e5f8ddbc3dad\"" Apr 24 23:38:14.506336 systemd[1]: Started cri-containerd-58877c8c84d2386bc2ad5b4416b267f9c3d69ad4d38895a991e2e5f8ddbc3dad.scope - libcontainer container 58877c8c84d2386bc2ad5b4416b267f9c3d69ad4d38895a991e2e5f8ddbc3dad. Apr 24 23:38:14.620944 containerd[2018]: time="2026-04-24T23:38:14.620175158Z" level=info msg="StartContainer for \"58877c8c84d2386bc2ad5b4416b267f9c3d69ad4d38895a991e2e5f8ddbc3dad\" returns successfully" Apr 24 23:38:14.626441 containerd[2018]: time="2026-04-24T23:38:14.626342726Z" level=info msg="StopPodSandbox for \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\"" Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.736 [WARNING][5828] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"af04062c-5fa7-4ac4-a06d-12268ffd9166", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318", Pod:"coredns-674b8bbfcf-sv25n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali151bf1dcb99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.737 
[INFO][5828] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.737 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" iface="eth0" netns="" Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.737 [INFO][5828] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.737 [INFO][5828] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.831 [INFO][5835] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" HandleID="k8s-pod-network.5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.833 [INFO][5835] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.833 [INFO][5835] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.857 [WARNING][5835] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" HandleID="k8s-pod-network.5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.857 [INFO][5835] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" HandleID="k8s-pod-network.5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.859 [INFO][5835] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:14.866734 containerd[2018]: 2026-04-24 23:38:14.863 [INFO][5828] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:14.868695 containerd[2018]: time="2026-04-24T23:38:14.866793160Z" level=info msg="TearDown network for sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\" successfully" Apr 24 23:38:14.868695 containerd[2018]: time="2026-04-24T23:38:14.866834632Z" level=info msg="StopPodSandbox for \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\" returns successfully" Apr 24 23:38:14.868695 containerd[2018]: time="2026-04-24T23:38:14.867886996Z" level=info msg="RemovePodSandbox for \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\"" Apr 24 23:38:14.868695 containerd[2018]: time="2026-04-24T23:38:14.867943480Z" level=info msg="Forcibly stopping sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\"" Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:14.951 [WARNING][5854] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"af04062c-5fa7-4ac4-a06d-12268ffd9166", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"7fd7ec923b69746c1579ddf6cf0f4abc11b0e9bfd0b53a7b48c90bf3270f4318", Pod:"coredns-674b8bbfcf-sv25n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali151bf1dcb99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:14.951 
[INFO][5854] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:14.952 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" iface="eth0" netns="" Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:14.952 [INFO][5854] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:14.952 [INFO][5854] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:14.997 [INFO][5861] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" HandleID="k8s-pod-network.5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:14.998 [INFO][5861] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:14.998 [INFO][5861] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:15.013 [WARNING][5861] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" HandleID="k8s-pod-network.5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:15.013 [INFO][5861] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" HandleID="k8s-pod-network.5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--sv25n-eth0" Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:15.016 [INFO][5861] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:15.024214 containerd[2018]: 2026-04-24 23:38:15.019 [INFO][5854] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3" Apr 24 23:38:15.024214 containerd[2018]: time="2026-04-24T23:38:15.023133084Z" level=info msg="TearDown network for sandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\" successfully" Apr 24 23:38:15.042740 containerd[2018]: time="2026-04-24T23:38:15.042607152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:15.042740 containerd[2018]: time="2026-04-24T23:38:15.042738816Z" level=info msg="RemovePodSandbox \"5f6667010837cd94ebbf3cdfb23db789fce4fcd741e17499e8078f196a7bfab3\" returns successfully" Apr 24 23:38:15.043574 containerd[2018]: time="2026-04-24T23:38:15.043521600Z" level=info msg="StopPodSandbox for \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\"" Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.136 [WARNING][5875] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.137 [INFO][5875] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.137 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" iface="eth0" netns="" Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.137 [INFO][5875] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.137 [INFO][5875] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.189 [INFO][5882] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" HandleID="k8s-pod-network.d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Workload="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.190 [INFO][5882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.190 [INFO][5882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.206 [WARNING][5882] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" HandleID="k8s-pod-network.d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Workload="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.206 [INFO][5882] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" HandleID="k8s-pod-network.d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Workload="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.210 [INFO][5882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:15.224570 containerd[2018]: 2026-04-24 23:38:15.213 [INFO][5875] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:15.224570 containerd[2018]: time="2026-04-24T23:38:15.224378533Z" level=info msg="TearDown network for sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\" successfully" Apr 24 23:38:15.224570 containerd[2018]: time="2026-04-24T23:38:15.224414125Z" level=info msg="StopPodSandbox for \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\" returns successfully" Apr 24 23:38:15.228908 containerd[2018]: time="2026-04-24T23:38:15.228451321Z" level=info msg="RemovePodSandbox for \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\"" Apr 24 23:38:15.228908 containerd[2018]: time="2026-04-24T23:38:15.228507121Z" level=info msg="Forcibly stopping sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\"" Apr 24 23:38:15.461835 kubelet[3426]: I0424 23:38:15.461729 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-77cfg" podStartSLOduration=30.786907416 
podStartE2EDuration="35.461704563s" podCreationTimestamp="2026-04-24 23:37:40 +0000 UTC" firstStartedPulling="2026-04-24 23:38:06.82169594 +0000 UTC m=+52.546279042" lastFinishedPulling="2026-04-24 23:38:11.496493087 +0000 UTC m=+57.221076189" observedRunningTime="2026-04-24 23:38:12.434734272 +0000 UTC m=+58.159317434" watchObservedRunningTime="2026-04-24 23:38:15.461704563 +0000 UTC m=+61.186287725" Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.315 [WARNING][5896] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" WorkloadEndpoint="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.316 [INFO][5896] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.316 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" iface="eth0" netns="" Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.316 [INFO][5896] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.316 [INFO][5896] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.424 [INFO][5904] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" HandleID="k8s-pod-network.d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Workload="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.426 [INFO][5904] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.427 [INFO][5904] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.466 [WARNING][5904] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" HandleID="k8s-pod-network.d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Workload="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.466 [INFO][5904] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" HandleID="k8s-pod-network.d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Workload="ip--172--31--17--112-k8s-whisker--5ddfdf984--kwv9n-eth0" Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.471 [INFO][5904] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:15.483119 containerd[2018]: 2026-04-24 23:38:15.477 [INFO][5896] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684" Apr 24 23:38:15.487040 containerd[2018]: time="2026-04-24T23:38:15.485666823Z" level=info msg="TearDown network for sandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\" successfully" Apr 24 23:38:15.495827 containerd[2018]: time="2026-04-24T23:38:15.495755703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:15.495959 containerd[2018]: time="2026-04-24T23:38:15.495859179Z" level=info msg="RemovePodSandbox \"d87af8c771fb4a4090e53b4e52007664fd855acbbd729e12d75d54f763109684\" returns successfully" Apr 24 23:38:15.498999 containerd[2018]: time="2026-04-24T23:38:15.497633175Z" level=info msg="StopPodSandbox for \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\"" Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.712 [WARNING][5925] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4359df61-6ba9-4f39-beb9-94953c5d320d", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010", Pod:"coredns-674b8bbfcf-nvwfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12e44058a6d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.712 [INFO][5925] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.712 [INFO][5925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" iface="eth0" netns="" Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.712 [INFO][5925] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.712 [INFO][5925] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.797 [INFO][5940] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" HandleID="k8s-pod-network.b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.798 [INFO][5940] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.798 [INFO][5940] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.814 [WARNING][5940] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" HandleID="k8s-pod-network.b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.814 [INFO][5940] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" HandleID="k8s-pod-network.b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.820 [INFO][5940] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:15.826821 containerd[2018]: 2026-04-24 23:38:15.823 [INFO][5925] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:15.826821 containerd[2018]: time="2026-04-24T23:38:15.826548640Z" level=info msg="TearDown network for sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\" successfully" Apr 24 23:38:15.826821 containerd[2018]: time="2026-04-24T23:38:15.826590448Z" level=info msg="StopPodSandbox for \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\" returns successfully" Apr 24 23:38:15.830119 containerd[2018]: time="2026-04-24T23:38:15.828440788Z" level=info msg="RemovePodSandbox for \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\"" Apr 24 23:38:15.830119 containerd[2018]: time="2026-04-24T23:38:15.828500896Z" level=info msg="Forcibly stopping sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\"" Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.910 [WARNING][5956] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4359df61-6ba9-4f39-beb9-94953c5d320d", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"80613d10c1a5735a6ed7bcb4078765ba67427660c2e6ac46c6ad7436601d4010", Pod:"coredns-674b8bbfcf-nvwfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12e44058a6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.911 
[INFO][5956] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.911 [INFO][5956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" iface="eth0" netns="" Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.911 [INFO][5956] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.911 [INFO][5956] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.960 [INFO][5963] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" HandleID="k8s-pod-network.b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.960 [INFO][5963] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.960 [INFO][5963] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.975 [WARNING][5963] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" HandleID="k8s-pod-network.b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.975 [INFO][5963] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" HandleID="k8s-pod-network.b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Workload="ip--172--31--17--112-k8s-coredns--674b8bbfcf--nvwfk-eth0" Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.977 [INFO][5963] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:15.988495 containerd[2018]: 2026-04-24 23:38:15.982 [INFO][5956] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca" Apr 24 23:38:15.989658 containerd[2018]: time="2026-04-24T23:38:15.989216321Z" level=info msg="TearDown network for sandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\" successfully" Apr 24 23:38:15.997633 containerd[2018]: time="2026-04-24T23:38:15.997074173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:15.997633 containerd[2018]: time="2026-04-24T23:38:15.997175273Z" level=info msg="RemovePodSandbox \"b4fd534663ef2e41688818e71b439da31221a08a63e61924e40cb1b8056cdbca\" returns successfully" Apr 24 23:38:15.998530 containerd[2018]: time="2026-04-24T23:38:15.998056637Z" level=info msg="StopPodSandbox for \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\"" Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.077 [WARNING][5977] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"8d59be44-1022-4a6d-9634-d8c1fdfc4c4b", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d", Pod:"goldmane-5b85766d88-77cfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calidb205269d41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.078 [INFO][5977] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.078 [INFO][5977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" iface="eth0" netns="" Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.078 [INFO][5977] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.078 [INFO][5977] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.137 [INFO][5984] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" HandleID="k8s-pod-network.24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.137 [INFO][5984] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.137 [INFO][5984] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.154 [WARNING][5984] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" HandleID="k8s-pod-network.24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.154 [INFO][5984] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" HandleID="k8s-pod-network.24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.157 [INFO][5984] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:16.170016 containerd[2018]: 2026-04-24 23:38:16.165 [INFO][5977] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:16.171315 containerd[2018]: time="2026-04-24T23:38:16.171005690Z" level=info msg="TearDown network for sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\" successfully" Apr 24 23:38:16.171315 containerd[2018]: time="2026-04-24T23:38:16.171055658Z" level=info msg="StopPodSandbox for \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\" returns successfully" Apr 24 23:38:16.173206 containerd[2018]: time="2026-04-24T23:38:16.173059154Z" level=info msg="RemovePodSandbox for \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\"" Apr 24 23:38:16.173650 containerd[2018]: time="2026-04-24T23:38:16.173417030Z" level=info msg="Forcibly stopping sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\"" Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.284 [WARNING][5999] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"8d59be44-1022-4a6d-9634-d8c1fdfc4c4b", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"945f363dcecabc0353b5849f480ea2cb873e26c787654b87b47a626785dd8f3d", Pod:"goldmane-5b85766d88-77cfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidb205269d41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.285 [INFO][5999] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.285 [INFO][5999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" iface="eth0" netns="" Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.285 [INFO][5999] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.285 [INFO][5999] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.340 [INFO][6006] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" HandleID="k8s-pod-network.24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.341 [INFO][6006] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.341 [INFO][6006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.365 [WARNING][6006] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" HandleID="k8s-pod-network.24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.365 [INFO][6006] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" HandleID="k8s-pod-network.24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Workload="ip--172--31--17--112-k8s-goldmane--5b85766d88--77cfg-eth0" Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.370 [INFO][6006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:16.382730 containerd[2018]: 2026-04-24 23:38:16.378 [INFO][5999] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850" Apr 24 23:38:16.386399 containerd[2018]: time="2026-04-24T23:38:16.382838187Z" level=info msg="TearDown network for sandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\" successfully" Apr 24 23:38:16.402939 containerd[2018]: time="2026-04-24T23:38:16.402759159Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:16.402939 containerd[2018]: time="2026-04-24T23:38:16.402874911Z" level=info msg="RemovePodSandbox \"24955415d6c11992198b2e474a1f3de2b1a74873953fdbd459948b28d4d9a850\" returns successfully" Apr 24 23:38:16.404606 containerd[2018]: time="2026-04-24T23:38:16.403840707Z" level=info msg="StopPodSandbox for \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\"" Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.655 [WARNING][6020] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6eaa5414-466a-4dc1-ab02-5e5914bb112e", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22", Pod:"csi-node-driver-d77dn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19901b59e85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.655 [INFO][6020] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.655 [INFO][6020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" iface="eth0" netns="" Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.655 [INFO][6020] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.655 [INFO][6020] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.747 [INFO][6029] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" HandleID="k8s-pod-network.c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.747 [INFO][6029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.747 [INFO][6029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.779 [WARNING][6029] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" HandleID="k8s-pod-network.c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.780 [INFO][6029] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" HandleID="k8s-pod-network.c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.782 [INFO][6029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:16.793057 containerd[2018]: 2026-04-24 23:38:16.789 [INFO][6020] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:16.795929 containerd[2018]: time="2026-04-24T23:38:16.795731129Z" level=info msg="TearDown network for sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\" successfully" Apr 24 23:38:16.795929 containerd[2018]: time="2026-04-24T23:38:16.795788657Z" level=info msg="StopPodSandbox for \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\" returns successfully" Apr 24 23:38:16.796920 containerd[2018]: time="2026-04-24T23:38:16.796860737Z" level=info msg="RemovePodSandbox for \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\"" Apr 24 23:38:16.797042 containerd[2018]: time="2026-04-24T23:38:16.796944677Z" level=info msg="Forcibly stopping sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\"" Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.894 [WARNING][6048] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6eaa5414-466a-4dc1-ab02-5e5914bb112e", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22", Pod:"csi-node-driver-d77dn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19901b59e85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.895 [INFO][6048] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.896 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" iface="eth0" netns="" Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.896 [INFO][6048] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.896 [INFO][6048] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.946 [INFO][6060] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" HandleID="k8s-pod-network.c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.946 [INFO][6060] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.946 [INFO][6060] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.962 [WARNING][6060] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" HandleID="k8s-pod-network.c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.962 [INFO][6060] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" HandleID="k8s-pod-network.c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Workload="ip--172--31--17--112-k8s-csi--node--driver--d77dn-eth0" Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.964 [INFO][6060] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:16.971807 containerd[2018]: 2026-04-24 23:38:16.968 [INFO][6048] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8" Apr 24 23:38:16.972672 containerd[2018]: time="2026-04-24T23:38:16.971861682Z" level=info msg="TearDown network for sandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\" successfully" Apr 24 23:38:16.979688 containerd[2018]: time="2026-04-24T23:38:16.979610550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:16.979885 containerd[2018]: time="2026-04-24T23:38:16.979705122Z" level=info msg="RemovePodSandbox \"c6865e68965e0f323af2659e1f94cfc606429b30a9a728e50f0f2c8690d3f9b8\" returns successfully" Apr 24 23:38:16.981041 containerd[2018]: time="2026-04-24T23:38:16.980548134Z" level=info msg="StopPodSandbox for \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\"" Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.046 [WARNING][6074] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0", GenerateName:"calico-apiserver-df9f5595c-", Namespace:"calico-system", SelfLink:"", UID:"65e053c6-4f3f-427f-8a78-6c78634b5620", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9f5595c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816", Pod:"calico-apiserver-df9f5595c-2967j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali91e7ca06fa7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.047 [INFO][6074] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.047 [INFO][6074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" iface="eth0" netns="" Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.047 [INFO][6074] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.047 [INFO][6074] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.092 [INFO][6082] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" HandleID="k8s-pod-network.cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.092 [INFO][6082] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.093 [INFO][6082] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.114 [WARNING][6082] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" HandleID="k8s-pod-network.cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.114 [INFO][6082] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" HandleID="k8s-pod-network.cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.117 [INFO][6082] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:17.124627 containerd[2018]: 2026-04-24 23:38:17.121 [INFO][6074] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:17.125766 containerd[2018]: time="2026-04-24T23:38:17.124682139Z" level=info msg="TearDown network for sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\" successfully" Apr 24 23:38:17.125766 containerd[2018]: time="2026-04-24T23:38:17.124720659Z" level=info msg="StopPodSandbox for \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\" returns successfully" Apr 24 23:38:17.125766 containerd[2018]: time="2026-04-24T23:38:17.125735907Z" level=info msg="RemovePodSandbox for \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\"" Apr 24 23:38:17.126052 containerd[2018]: time="2026-04-24T23:38:17.125784975Z" level=info msg="Forcibly stopping sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\"" Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.201 [WARNING][6096] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0", GenerateName:"calico-apiserver-df9f5595c-", Namespace:"calico-system", SelfLink:"", UID:"65e053c6-4f3f-427f-8a78-6c78634b5620", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9f5595c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816", Pod:"calico-apiserver-df9f5595c-2967j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali91e7ca06fa7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.203 [INFO][6096] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.204 [INFO][6096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" iface="eth0" netns="" Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.204 [INFO][6096] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.204 [INFO][6096] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.253 [INFO][6104] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" HandleID="k8s-pod-network.cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.256 [INFO][6104] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.256 [INFO][6104] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.273 [WARNING][6104] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" HandleID="k8s-pod-network.cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.273 [INFO][6104] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" HandleID="k8s-pod-network.cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--2967j-eth0" Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.276 [INFO][6104] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:17.290174 containerd[2018]: 2026-04-24 23:38:17.284 [INFO][6096] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb" Apr 24 23:38:17.290174 containerd[2018]: time="2026-04-24T23:38:17.289317604Z" level=info msg="TearDown network for sandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\" successfully" Apr 24 23:38:17.293026 systemd[1]: Started sshd@7-172.31.17.112:22-20.229.252.112:47294.service - OpenSSH per-connection server daemon (20.229.252.112:47294). Apr 24 23:38:17.303727 containerd[2018]: time="2026-04-24T23:38:17.303535108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:17.303727 containerd[2018]: time="2026-04-24T23:38:17.303647884Z" level=info msg="RemovePodSandbox \"cd91ce50f98b3389d0cdb159560eea51cefdfb2df53e1173d8b7056a89b1f0cb\" returns successfully" Apr 24 23:38:17.305331 containerd[2018]: time="2026-04-24T23:38:17.305250388Z" level=info msg="StopPodSandbox for \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\"" Apr 24 23:38:17.548824 kubelet[3426]: I0424 23:38:17.548662 3426 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.474 [WARNING][6121] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0", GenerateName:"calico-kube-controllers-8696996ff8-", Namespace:"calico-system", SelfLink:"", UID:"484f4df4-9a9a-41cc-87e4-b75a5e07666e", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8696996ff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28", Pod:"calico-kube-controllers-8696996ff8-qz98n", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b5848d13c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.474 [INFO][6121] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.474 [INFO][6121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" iface="eth0" netns="" Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.474 [INFO][6121] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.474 [INFO][6121] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.553 [INFO][6130] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" HandleID="k8s-pod-network.51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.554 [INFO][6130] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.554 [INFO][6130] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.573 [WARNING][6130] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" HandleID="k8s-pod-network.51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.573 [INFO][6130] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" HandleID="k8s-pod-network.51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.581 [INFO][6130] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:17.596307 containerd[2018]: 2026-04-24 23:38:17.589 [INFO][6121] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:17.598139 containerd[2018]: time="2026-04-24T23:38:17.596355449Z" level=info msg="TearDown network for sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\" successfully" Apr 24 23:38:17.598139 containerd[2018]: time="2026-04-24T23:38:17.596397113Z" level=info msg="StopPodSandbox for \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\" returns successfully" Apr 24 23:38:17.598139 containerd[2018]: time="2026-04-24T23:38:17.597602417Z" level=info msg="RemovePodSandbox for \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\"" Apr 24 23:38:17.598139 containerd[2018]: time="2026-04-24T23:38:17.597655289Z" level=info msg="Forcibly stopping sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\"" Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.756 [WARNING][6145] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0", GenerateName:"calico-kube-controllers-8696996ff8-", Namespace:"calico-system", SelfLink:"", UID:"484f4df4-9a9a-41cc-87e4-b75a5e07666e", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8696996ff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28", Pod:"calico-kube-controllers-8696996ff8-qz98n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b5848d13c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.756 [INFO][6145] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.756 [INFO][6145] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" iface="eth0" netns="" Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.756 [INFO][6145] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.756 [INFO][6145] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.848 [INFO][6156] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" HandleID="k8s-pod-network.51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.849 [INFO][6156] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.849 [INFO][6156] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.877 [WARNING][6156] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" HandleID="k8s-pod-network.51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.878 [INFO][6156] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" HandleID="k8s-pod-network.51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Workload="ip--172--31--17--112-k8s-calico--kube--controllers--8696996ff8--qz98n-eth0" Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.886 [INFO][6156] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:17.899023 containerd[2018]: 2026-04-24 23:38:17.892 [INFO][6145] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc" Apr 24 23:38:17.900298 containerd[2018]: time="2026-04-24T23:38:17.899074423Z" level=info msg="TearDown network for sandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\" successfully" Apr 24 23:38:17.918685 containerd[2018]: time="2026-04-24T23:38:17.916941427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:38:17.918685 containerd[2018]: time="2026-04-24T23:38:17.917157571Z" level=info msg="RemovePodSandbox \"51e6c7175af89438d549e4773eccba71da85969522309a807e57c019d02ba2fc\" returns successfully" Apr 24 23:38:17.918685 containerd[2018]: time="2026-04-24T23:38:17.917853967Z" level=info msg="StopPodSandbox for \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\"" Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.056 [WARNING][6171] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0", GenerateName:"calico-apiserver-df9f5595c-", Namespace:"calico-system", SelfLink:"", UID:"b3876f02-9c03-4cf9-883e-82b297082dbe", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9f5595c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159", Pod:"calico-apiserver-df9f5595c-hn778", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3b1306e0af3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.056 [INFO][6171] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.056 [INFO][6171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" iface="eth0" netns="" Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.056 [INFO][6171] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.056 [INFO][6171] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.154 [INFO][6178] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" HandleID="k8s-pod-network.d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.157 [INFO][6178] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.158 [INFO][6178] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.191 [WARNING][6178] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" HandleID="k8s-pod-network.d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.192 [INFO][6178] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" HandleID="k8s-pod-network.d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.196 [INFO][6178] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:18.210387 containerd[2018]: 2026-04-24 23:38:18.201 [INFO][6171] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:18.213931 containerd[2018]: time="2026-04-24T23:38:18.213243556Z" level=info msg="TearDown network for sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\" successfully" Apr 24 23:38:18.213931 containerd[2018]: time="2026-04-24T23:38:18.213293620Z" level=info msg="StopPodSandbox for \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\" returns successfully" Apr 24 23:38:18.214822 containerd[2018]: time="2026-04-24T23:38:18.214759276Z" level=info msg="RemovePodSandbox for \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\"" Apr 24 23:38:18.214921 containerd[2018]: time="2026-04-24T23:38:18.214821592Z" level=info msg="Forcibly stopping sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\"" Apr 24 23:38:18.402997 sshd[6111]: Accepted publickey for core from 20.229.252.112 port 47294 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:38:18.409345 sshd[6111]: pam_unix(sshd:session): session 
opened for user core(uid=500) by core(uid=0) Apr 24 23:38:18.427346 systemd-logind[1993]: New session 8 of user core. Apr 24 23:38:18.461527 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.422 [WARNING][6192] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0", GenerateName:"calico-apiserver-df9f5595c-", Namespace:"calico-system", SelfLink:"", UID:"b3876f02-9c03-4cf9-883e-82b297082dbe", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9f5595c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-112", ContainerID:"e0422e01fa6e6da1721fffed0b680c22f4b88d5fa9b67c767a1ddbd8b5081159", Pod:"calico-apiserver-df9f5595c-hn778", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3b1306e0af3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.424 [INFO][6192] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.424 [INFO][6192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" iface="eth0" netns="" Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.425 [INFO][6192] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.425 [INFO][6192] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.539 [INFO][6200] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" HandleID="k8s-pod-network.d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.540 [INFO][6200] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.540 [INFO][6200] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.592 [WARNING][6200] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" HandleID="k8s-pod-network.d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.592 [INFO][6200] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" HandleID="k8s-pod-network.d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Workload="ip--172--31--17--112-k8s-calico--apiserver--df9f5595c--hn778-eth0" Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.598 [INFO][6200] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:38:18.619944 containerd[2018]: 2026-04-24 23:38:18.611 [INFO][6192] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731" Apr 24 23:38:18.619944 containerd[2018]: time="2026-04-24T23:38:18.619494594Z" level=info msg="TearDown network for sandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\" successfully" Apr 24 23:38:18.634195 containerd[2018]: time="2026-04-24T23:38:18.633580506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 24 23:38:18.634195 containerd[2018]: time="2026-04-24T23:38:18.633673974Z" level=info msg="RemovePodSandbox \"d611da877bf47a0689ed7698eb7d95eac66730db14e5ef9bc32069502d8da731\" returns successfully" Apr 24 23:38:19.402413 sshd[6111]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:19.415111 systemd[1]: sshd@7-172.31.17.112:22-20.229.252.112:47294.service: Deactivated successfully. 
Apr 24 23:38:19.425295 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:38:19.432179 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:38:19.437084 systemd-logind[1993]: Removed session 8. Apr 24 23:38:19.650421 kubelet[3426]: I0424 23:38:19.650318 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-df9f5595c-hn778" podStartSLOduration=34.368980967 podStartE2EDuration="41.650297515s" podCreationTimestamp="2026-04-24 23:37:38 +0000 UTC" firstStartedPulling="2026-04-24 23:38:07.061357637 +0000 UTC m=+52.785940751" lastFinishedPulling="2026-04-24 23:38:14.342674197 +0000 UTC m=+60.067257299" observedRunningTime="2026-04-24 23:38:15.466935915 +0000 UTC m=+61.191519041" watchObservedRunningTime="2026-04-24 23:38:19.650297515 +0000 UTC m=+65.374880629" Apr 24 23:38:20.109208 containerd[2018]: time="2026-04-24T23:38:20.108072462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:20.113431 containerd[2018]: time="2026-04-24T23:38:20.111433626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 24 23:38:20.117342 containerd[2018]: time="2026-04-24T23:38:20.116949750Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:20.128554 containerd[2018]: time="2026-04-24T23:38:20.127744926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:20.132775 containerd[2018]: time="2026-04-24T23:38:20.132676686Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 5.789449613s" Apr 24 23:38:20.133231 containerd[2018]: time="2026-04-24T23:38:20.133150266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 24 23:38:20.136205 containerd[2018]: time="2026-04-24T23:38:20.135473082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:38:20.193442 containerd[2018]: time="2026-04-24T23:38:20.193357110Z" level=info msg="CreateContainer within sandbox \"ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:38:20.225635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788347474.mount: Deactivated successfully. Apr 24 23:38:20.228614 containerd[2018]: time="2026-04-24T23:38:20.228195990Z" level=info msg="CreateContainer within sandbox \"ad914eed2b59c59276ebf6f1027ad20c822c5a424be19db35328cd8f442d3e28\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6589bb1d69e6a90a8dbacc572665c2f034f473c79e49009939ad6789f63da617\"" Apr 24 23:38:20.231515 containerd[2018]: time="2026-04-24T23:38:20.230724942Z" level=info msg="StartContainer for \"6589bb1d69e6a90a8dbacc572665c2f034f473c79e49009939ad6789f63da617\"" Apr 24 23:38:20.298274 systemd[1]: Started cri-containerd-6589bb1d69e6a90a8dbacc572665c2f034f473c79e49009939ad6789f63da617.scope - libcontainer container 6589bb1d69e6a90a8dbacc572665c2f034f473c79e49009939ad6789f63da617. 
Apr 24 23:38:20.383332 containerd[2018]: time="2026-04-24T23:38:20.381753847Z" level=info msg="StartContainer for \"6589bb1d69e6a90a8dbacc572665c2f034f473c79e49009939ad6789f63da617\" returns successfully" Apr 24 23:38:21.478589 containerd[2018]: time="2026-04-24T23:38:21.478506716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:21.480852 containerd[2018]: time="2026-04-24T23:38:21.480571472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 24 23:38:21.483610 containerd[2018]: time="2026-04-24T23:38:21.483085604Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:21.488644 containerd[2018]: time="2026-04-24T23:38:21.488591097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:21.490669 containerd[2018]: time="2026-04-24T23:38:21.490604481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.355050015s" Apr 24 23:38:21.490886 containerd[2018]: time="2026-04-24T23:38:21.490851525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 24 23:38:21.497691 containerd[2018]: time="2026-04-24T23:38:21.497620353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 
23:38:21.506569 containerd[2018]: time="2026-04-24T23:38:21.506099229Z" level=info msg="CreateContainer within sandbox \"23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:38:21.543948 containerd[2018]: time="2026-04-24T23:38:21.543868401Z" level=info msg="CreateContainer within sandbox \"23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cdc8411c04727da9e2f380f98d90b9f5196257c99ceaed93da7d9741d38d1446\"" Apr 24 23:38:21.547549 containerd[2018]: time="2026-04-24T23:38:21.546944121Z" level=info msg="StartContainer for \"cdc8411c04727da9e2f380f98d90b9f5196257c99ceaed93da7d9741d38d1446\"" Apr 24 23:38:21.626347 systemd[1]: Started cri-containerd-cdc8411c04727da9e2f380f98d90b9f5196257c99ceaed93da7d9741d38d1446.scope - libcontainer container cdc8411c04727da9e2f380f98d90b9f5196257c99ceaed93da7d9741d38d1446. Apr 24 23:38:21.713180 containerd[2018]: time="2026-04-24T23:38:21.711852466Z" level=info msg="StartContainer for \"cdc8411c04727da9e2f380f98d90b9f5196257c99ceaed93da7d9741d38d1446\" returns successfully" Apr 24 23:38:21.758346 kubelet[3426]: I0424 23:38:21.757549 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8696996ff8-qz98n" podStartSLOduration=25.773726541 podStartE2EDuration="38.757524658s" podCreationTimestamp="2026-04-24 23:37:43 +0000 UTC" firstStartedPulling="2026-04-24 23:38:07.151384289 +0000 UTC m=+52.875967403" lastFinishedPulling="2026-04-24 23:38:20.135182406 +0000 UTC m=+65.859765520" observedRunningTime="2026-04-24 23:38:20.644566676 +0000 UTC m=+66.369149802" watchObservedRunningTime="2026-04-24 23:38:21.757524658 +0000 UTC m=+67.482107772" Apr 24 23:38:21.829000 containerd[2018]: time="2026-04-24T23:38:21.827585194Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:21.829656 containerd[2018]: time="2026-04-24T23:38:21.829615270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 24 23:38:21.834650 containerd[2018]: time="2026-04-24T23:38:21.834593434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 335.784421ms" Apr 24 23:38:21.834847 containerd[2018]: time="2026-04-24T23:38:21.834816298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 24 23:38:21.838207 containerd[2018]: time="2026-04-24T23:38:21.838139962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:38:21.846285 containerd[2018]: time="2026-04-24T23:38:21.846203626Z" level=info msg="CreateContainer within sandbox \"25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:38:21.876900 containerd[2018]: time="2026-04-24T23:38:21.876730822Z" level=info msg="CreateContainer within sandbox \"25d24863abf6207fbc84750c17022dad3d07b7f8eef932d80c00f3877cd06816\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c67ab9f8b599b902136eab89fc46f76da55ea010b86b7b24b12a32ade858793\"" Apr 24 23:38:21.880685 containerd[2018]: time="2026-04-24T23:38:21.878359846Z" level=info msg="StartContainer for \"9c67ab9f8b599b902136eab89fc46f76da55ea010b86b7b24b12a32ade858793\"" Apr 24 23:38:21.927342 systemd[1]: Started 
cri-containerd-9c67ab9f8b599b902136eab89fc46f76da55ea010b86b7b24b12a32ade858793.scope - libcontainer container 9c67ab9f8b599b902136eab89fc46f76da55ea010b86b7b24b12a32ade858793. Apr 24 23:38:22.005807 containerd[2018]: time="2026-04-24T23:38:22.005727739Z" level=info msg="StartContainer for \"9c67ab9f8b599b902136eab89fc46f76da55ea010b86b7b24b12a32ade858793\" returns successfully" Apr 24 23:38:22.630005 kubelet[3426]: I0424 23:38:22.628842 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-df9f5595c-2967j" podStartSLOduration=30.190888399 podStartE2EDuration="44.628819426s" podCreationTimestamp="2026-04-24 23:37:38 +0000 UTC" firstStartedPulling="2026-04-24 23:38:07.399193171 +0000 UTC m=+53.123776285" lastFinishedPulling="2026-04-24 23:38:21.837124186 +0000 UTC m=+67.561707312" observedRunningTime="2026-04-24 23:38:22.626610922 +0000 UTC m=+68.351194048" watchObservedRunningTime="2026-04-24 23:38:22.628819426 +0000 UTC m=+68.353402540" Apr 24 23:38:23.338462 containerd[2018]: time="2026-04-24T23:38:23.338385334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:23.342813 containerd[2018]: time="2026-04-24T23:38:23.342731806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 24 23:38:23.345808 containerd[2018]: time="2026-04-24T23:38:23.345734758Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:23.354696 containerd[2018]: time="2026-04-24T23:38:23.354606922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:23.359366 
containerd[2018]: time="2026-04-24T23:38:23.359271502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.521060116s" Apr 24 23:38:23.359366 containerd[2018]: time="2026-04-24T23:38:23.359351446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 24 23:38:23.363964 containerd[2018]: time="2026-04-24T23:38:23.363881446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:38:23.374732 containerd[2018]: time="2026-04-24T23:38:23.374095930Z" level=info msg="CreateContainer within sandbox \"45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:38:23.430532 containerd[2018]: time="2026-04-24T23:38:23.430473202Z" level=info msg="CreateContainer within sandbox \"45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e2bc5f2aeaeab50a5d66294fe56f73e4e2a97f8fd748964ae3d304e6cbb00e8e\"" Apr 24 23:38:23.432653 containerd[2018]: time="2026-04-24T23:38:23.432588802Z" level=info msg="StartContainer for \"e2bc5f2aeaeab50a5d66294fe56f73e4e2a97f8fd748964ae3d304e6cbb00e8e\"" Apr 24 23:38:23.515795 systemd[1]: Started cri-containerd-e2bc5f2aeaeab50a5d66294fe56f73e4e2a97f8fd748964ae3d304e6cbb00e8e.scope - libcontainer container e2bc5f2aeaeab50a5d66294fe56f73e4e2a97f8fd748964ae3d304e6cbb00e8e. 
Apr 24 23:38:23.622491 kubelet[3426]: I0424 23:38:23.621287 3426 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 24 23:38:23.639419 containerd[2018]: time="2026-04-24T23:38:23.639353711Z" level=info msg="StartContainer for \"e2bc5f2aeaeab50a5d66294fe56f73e4e2a97f8fd748964ae3d304e6cbb00e8e\" returns successfully"
Apr 24 23:38:24.600074 systemd[1]: Started sshd@8-172.31.17.112:22-20.229.252.112:47302.service - OpenSSH per-connection server daemon (20.229.252.112:47302).
Apr 24 23:38:25.085813 containerd[2018]: time="2026-04-24T23:38:25.085729930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:25.088038 containerd[2018]: time="2026-04-24T23:38:25.087617686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Apr 24 23:38:25.090288 containerd[2018]: time="2026-04-24T23:38:25.090208030Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:25.095744 containerd[2018]: time="2026-04-24T23:38:25.095649706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:25.098273 containerd[2018]: time="2026-04-24T23:38:25.098063434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.734099668s"
Apr 24 23:38:25.098273 containerd[2018]: time="2026-04-24T23:38:25.098129302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\""
Apr 24 23:38:25.101891 containerd[2018]: time="2026-04-24T23:38:25.101082370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 24 23:38:25.111644 containerd[2018]: time="2026-04-24T23:38:25.110665210Z" level=info msg="CreateContainer within sandbox \"23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 24 23:38:25.154004 containerd[2018]: time="2026-04-24T23:38:25.153291059Z" level=info msg="CreateContainer within sandbox \"23a184c55b23263ac7d131c8cb1b5e59201fe1e7d25b63176d6bfb88a040df22\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6dcd167c6811e52aa328a828784a18a2ed17523b70aa3b124d46b9e9f3a2ca22\""
Apr 24 23:38:25.153799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1948563278.mount: Deactivated successfully.
Apr 24 23:38:25.158006 containerd[2018]: time="2026-04-24T23:38:25.157646687Z" level=info msg="StartContainer for \"6dcd167c6811e52aa328a828784a18a2ed17523b70aa3b124d46b9e9f3a2ca22\""
Apr 24 23:38:25.238335 systemd[1]: Started cri-containerd-6dcd167c6811e52aa328a828784a18a2ed17523b70aa3b124d46b9e9f3a2ca22.scope - libcontainer container 6dcd167c6811e52aa328a828784a18a2ed17523b70aa3b124d46b9e9f3a2ca22.
Apr 24 23:38:25.305529 containerd[2018]: time="2026-04-24T23:38:25.305413055Z" level=info msg="StartContainer for \"6dcd167c6811e52aa328a828784a18a2ed17523b70aa3b124d46b9e9f3a2ca22\" returns successfully"
Apr 24 23:38:25.682678 sshd[6450]: Accepted publickey for core from 20.229.252.112 port 47302 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:25.686378 sshd[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:25.695938 systemd-logind[1993]: New session 9 of user core.
Apr 24 23:38:25.703293 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 24 23:38:25.820217 kubelet[3426]: I0424 23:38:25.819947 3426 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 24 23:38:25.820217 kubelet[3426]: I0424 23:38:25.820054 3426 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 24 23:38:26.635890 sshd[6450]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:26.651728 systemd[1]: sshd@8-172.31.17.112:22-20.229.252.112:47302.service: Deactivated successfully.
Apr 24 23:38:26.659522 systemd[1]: session-9.scope: Deactivated successfully.
Apr 24 23:38:26.662374 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit.
Apr 24 23:38:26.667050 systemd-logind[1993]: Removed session 9.
Apr 24 23:38:26.836775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4246057949.mount: Deactivated successfully.
Apr 24 23:38:26.881715 containerd[2018]: time="2026-04-24T23:38:26.881629407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:26.884048 containerd[2018]: time="2026-04-24T23:38:26.883620255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Apr 24 23:38:26.886610 containerd[2018]: time="2026-04-24T23:38:26.886192179Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:26.893578 containerd[2018]: time="2026-04-24T23:38:26.893419515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:26.895760 containerd[2018]: time="2026-04-24T23:38:26.895540695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.794381765s"
Apr 24 23:38:26.895760 containerd[2018]: time="2026-04-24T23:38:26.895611747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Apr 24 23:38:26.905904 containerd[2018]: time="2026-04-24T23:38:26.905676711Z" level=info msg="CreateContainer within sandbox \"45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 24 23:38:26.939882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974043655.mount: Deactivated successfully.
Apr 24 23:38:26.940623 containerd[2018]: time="2026-04-24T23:38:26.940543516Z" level=info msg="CreateContainer within sandbox \"45f2c145111b999bdc4b7024b93f6b1db6a3a6a7d292600738c631e8976aa787\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0d747af0a1623c0824198dfde16b945d478be03d8fc7f1df3e9803b924df5740\""
Apr 24 23:38:26.944257 containerd[2018]: time="2026-04-24T23:38:26.944157064Z" level=info msg="StartContainer for \"0d747af0a1623c0824198dfde16b945d478be03d8fc7f1df3e9803b924df5740\""
Apr 24 23:38:27.008323 systemd[1]: Started cri-containerd-0d747af0a1623c0824198dfde16b945d478be03d8fc7f1df3e9803b924df5740.scope - libcontainer container 0d747af0a1623c0824198dfde16b945d478be03d8fc7f1df3e9803b924df5740.
Apr 24 23:38:27.083766 containerd[2018]: time="2026-04-24T23:38:27.083684724Z" level=info msg="StartContainer for \"0d747af0a1623c0824198dfde16b945d478be03d8fc7f1df3e9803b924df5740\" returns successfully"
Apr 24 23:38:27.087409 kubelet[3426]: I0424 23:38:27.087353 3426 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 24 23:38:27.151298 kubelet[3426]: I0424 23:38:27.151093 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d77dn" podStartSLOduration=26.223798952 podStartE2EDuration="44.150939229s" podCreationTimestamp="2026-04-24 23:37:43 +0000 UTC" firstStartedPulling="2026-04-24 23:38:07.173208521 +0000 UTC m=+52.897791635" lastFinishedPulling="2026-04-24 23:38:25.100348714 +0000 UTC m=+70.824931912" observedRunningTime="2026-04-24 23:38:25.667333765 +0000 UTC m=+71.391916903" watchObservedRunningTime="2026-04-24 23:38:27.150939229 +0000 UTC m=+72.875522343"
Apr 24 23:38:27.691020 kubelet[3426]: I0424 23:38:27.689890 3426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-bc76d45f7-hgqwn" podStartSLOduration=3.94773655 podStartE2EDuration="22.689863899s" podCreationTimestamp="2026-04-24 23:38:05 +0000 UTC" firstStartedPulling="2026-04-24 23:38:08.156162654 +0000 UTC m=+53.880745768" lastFinishedPulling="2026-04-24 23:38:26.898290003 +0000 UTC m=+72.622873117" observedRunningTime="2026-04-24 23:38:27.687900963 +0000 UTC m=+73.412484113" watchObservedRunningTime="2026-04-24 23:38:27.689863899 +0000 UTC m=+73.414447013"
Apr 24 23:38:31.823614 systemd[1]: Started sshd@9-172.31.17.112:22-20.229.252.112:48002.service - OpenSSH per-connection server daemon (20.229.252.112:48002).
Apr 24 23:38:32.888047 sshd[6592]: Accepted publickey for core from 20.229.252.112 port 48002 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:32.891466 sshd[6592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:32.901410 systemd-logind[1993]: New session 10 of user core.
Apr 24 23:38:32.907279 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 24 23:38:33.747580 sshd[6592]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:33.759429 systemd[1]: sshd@9-172.31.17.112:22-20.229.252.112:48002.service: Deactivated successfully.
Apr 24 23:38:33.766125 systemd[1]: session-10.scope: Deactivated successfully.
Apr 24 23:38:33.768446 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit.
Apr 24 23:38:33.770871 systemd-logind[1993]: Removed session 10.
Apr 24 23:38:38.940913 systemd[1]: Started sshd@10-172.31.17.112:22-20.229.252.112:60264.service - OpenSSH per-connection server daemon (20.229.252.112:60264).
Apr 24 23:38:39.989956 sshd[6634]: Accepted publickey for core from 20.229.252.112 port 60264 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:39.993124 sshd[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:40.002106 systemd-logind[1993]: New session 11 of user core.
Apr 24 23:38:40.009300 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 24 23:38:40.900226 sshd[6634]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:40.906701 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit.
Apr 24 23:38:40.907125 systemd[1]: sshd@10-172.31.17.112:22-20.229.252.112:60264.service: Deactivated successfully.
Apr 24 23:38:40.911128 systemd[1]: session-11.scope: Deactivated successfully.
Apr 24 23:38:40.915510 systemd-logind[1993]: Removed session 11.
Apr 24 23:38:41.091468 systemd[1]: Started sshd@11-172.31.17.112:22-20.229.252.112:60270.service - OpenSSH per-connection server daemon (20.229.252.112:60270).
Apr 24 23:38:42.131520 sshd[6663]: Accepted publickey for core from 20.229.252.112 port 60270 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:42.133430 sshd[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:42.142507 systemd-logind[1993]: New session 12 of user core.
Apr 24 23:38:42.150305 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 24 23:38:43.068025 sshd[6663]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:43.075242 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit.
Apr 24 23:38:43.075650 systemd[1]: sshd@11-172.31.17.112:22-20.229.252.112:60270.service: Deactivated successfully.
Apr 24 23:38:43.085085 systemd[1]: session-12.scope: Deactivated successfully.
Apr 24 23:38:43.091249 systemd-logind[1993]: Removed session 12.
Apr 24 23:38:43.254541 systemd[1]: Started sshd@12-172.31.17.112:22-20.229.252.112:60274.service - OpenSSH per-connection server daemon (20.229.252.112:60274).
Apr 24 23:38:44.302035 sshd[6674]: Accepted publickey for core from 20.229.252.112 port 60274 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:44.304155 sshd[6674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:44.314735 systemd-logind[1993]: New session 13 of user core.
Apr 24 23:38:44.322292 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 24 23:38:45.152057 sshd[6674]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:45.158195 systemd[1]: sshd@12-172.31.17.112:22-20.229.252.112:60274.service: Deactivated successfully.
Apr 24 23:38:45.162698 systemd[1]: session-13.scope: Deactivated successfully.
Apr 24 23:38:45.165314 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit.
Apr 24 23:38:45.169192 systemd-logind[1993]: Removed session 13.
Apr 24 23:38:50.339195 systemd[1]: Started sshd@13-172.31.17.112:22-20.229.252.112:57302.service - OpenSSH per-connection server daemon (20.229.252.112:57302).
Apr 24 23:38:51.375024 sshd[6716]: Accepted publickey for core from 20.229.252.112 port 57302 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:51.379361 sshd[6716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:51.392354 systemd-logind[1993]: New session 14 of user core.
Apr 24 23:38:51.402389 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 24 23:38:52.281881 sshd[6716]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:52.293330 systemd[1]: sshd@13-172.31.17.112:22-20.229.252.112:57302.service: Deactivated successfully.
Apr 24 23:38:52.299371 systemd[1]: session-14.scope: Deactivated successfully.
Apr 24 23:38:52.306038 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit.
Apr 24 23:38:52.308377 systemd-logind[1993]: Removed session 14.
Apr 24 23:38:52.470553 systemd[1]: Started sshd@14-172.31.17.112:22-20.229.252.112:57310.service - OpenSSH per-connection server daemon (20.229.252.112:57310).
Apr 24 23:38:53.525302 sshd[6763]: Accepted publickey for core from 20.229.252.112 port 57310 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:53.528834 sshd[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:53.541789 systemd-logind[1993]: New session 15 of user core.
Apr 24 23:38:53.551363 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 24 23:38:54.780435 sshd[6763]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:54.788572 systemd[1]: sshd@14-172.31.17.112:22-20.229.252.112:57310.service: Deactivated successfully.
Apr 24 23:38:54.798059 systemd[1]: session-15.scope: Deactivated successfully.
Apr 24 23:38:54.802478 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit.
Apr 24 23:38:54.809058 systemd-logind[1993]: Removed session 15.
Apr 24 23:38:54.958503 systemd[1]: Started sshd@15-172.31.17.112:22-20.229.252.112:57326.service - OpenSSH per-connection server daemon (20.229.252.112:57326).
Apr 24 23:38:55.983438 sshd[6773]: Accepted publickey for core from 20.229.252.112 port 57326 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:55.986356 sshd[6773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:55.996330 systemd-logind[1993]: New session 16 of user core.
Apr 24 23:38:56.002334 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 24 23:38:57.621616 sshd[6773]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:57.629724 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit.
Apr 24 23:38:57.632087 systemd[1]: sshd@15-172.31.17.112:22-20.229.252.112:57326.service: Deactivated successfully.
Apr 24 23:38:57.637767 systemd[1]: session-16.scope: Deactivated successfully.
Apr 24 23:38:57.639286 systemd-logind[1993]: Removed session 16.
Apr 24 23:38:57.805557 systemd[1]: Started sshd@16-172.31.17.112:22-20.229.252.112:56516.service - OpenSSH per-connection server daemon (20.229.252.112:56516).
Apr 24 23:38:58.846080 sshd[6799]: Accepted publickey for core from 20.229.252.112 port 56516 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:38:58.849084 sshd[6799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:38:58.861052 systemd-logind[1993]: New session 17 of user core.
Apr 24 23:38:58.867257 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 24 23:38:59.946691 sshd[6799]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:59.954936 systemd[1]: sshd@16-172.31.17.112:22-20.229.252.112:56516.service: Deactivated successfully.
Apr 24 23:38:59.959476 systemd[1]: session-17.scope: Deactivated successfully.
Apr 24 23:38:59.961855 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit.
Apr 24 23:38:59.965444 systemd-logind[1993]: Removed session 17.
Apr 24 23:39:00.134545 systemd[1]: Started sshd@17-172.31.17.112:22-20.229.252.112:56528.service - OpenSSH per-connection server daemon (20.229.252.112:56528).
Apr 24 23:39:01.178185 sshd[6810]: Accepted publickey for core from 20.229.252.112 port 56528 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:01.183705 sshd[6810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:01.200276 systemd-logind[1993]: New session 18 of user core.
Apr 24 23:39:01.207305 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 24 23:39:02.026445 sshd[6810]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:02.033556 systemd[1]: sshd@17-172.31.17.112:22-20.229.252.112:56528.service: Deactivated successfully.
Apr 24 23:39:02.040028 systemd[1]: session-18.scope: Deactivated successfully.
Apr 24 23:39:02.041402 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit.
Apr 24 23:39:02.043258 systemd-logind[1993]: Removed session 18.
Apr 24 23:39:07.210495 systemd[1]: Started sshd@18-172.31.17.112:22-20.229.252.112:49430.service - OpenSSH per-connection server daemon (20.229.252.112:49430).
Apr 24 23:39:08.253560 sshd[6866]: Accepted publickey for core from 20.229.252.112 port 49430 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:08.257107 sshd[6866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:08.264805 systemd-logind[1993]: New session 19 of user core.
Apr 24 23:39:08.273370 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 24 23:39:09.101264 sshd[6866]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:09.111869 systemd[1]: sshd@18-172.31.17.112:22-20.229.252.112:49430.service: Deactivated successfully.
Apr 24 23:39:09.116770 systemd[1]: session-19.scope: Deactivated successfully.
Apr 24 23:39:09.120378 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit.
Apr 24 23:39:09.123268 systemd-logind[1993]: Removed session 19.
Apr 24 23:39:14.288499 systemd[1]: Started sshd@19-172.31.17.112:22-20.229.252.112:49442.service - OpenSSH per-connection server daemon (20.229.252.112:49442).
Apr 24 23:39:15.325918 sshd[6878]: Accepted publickey for core from 20.229.252.112 port 49442 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:15.328740 sshd[6878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:15.337780 systemd-logind[1993]: New session 20 of user core.
Apr 24 23:39:15.341312 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 24 23:39:16.167833 sshd[6878]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:16.177214 systemd[1]: sshd@19-172.31.17.112:22-20.229.252.112:49442.service: Deactivated successfully.
Apr 24 23:39:16.182811 systemd[1]: session-20.scope: Deactivated successfully.
Apr 24 23:39:16.184920 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit.
Apr 24 23:39:16.187021 systemd-logind[1993]: Removed session 20.
Apr 24 23:39:21.353841 systemd[1]: Started sshd@20-172.31.17.112:22-20.229.252.112:38456.service - OpenSSH per-connection server daemon (20.229.252.112:38456).
Apr 24 23:39:22.366236 sshd[6913]: Accepted publickey for core from 20.229.252.112 port 38456 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:22.370059 sshd[6913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:22.379192 systemd-logind[1993]: New session 21 of user core.
Apr 24 23:39:22.385247 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 24 23:39:23.194482 sshd[6913]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:23.203494 systemd[1]: sshd@20-172.31.17.112:22-20.229.252.112:38456.service: Deactivated successfully.
Apr 24 23:39:23.208050 systemd[1]: session-21.scope: Deactivated successfully.
Apr 24 23:39:23.211550 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit.
Apr 24 23:39:23.213802 systemd-logind[1993]: Removed session 21.
Apr 24 23:39:28.381521 systemd[1]: Started sshd@21-172.31.17.112:22-20.229.252.112:52904.service - OpenSSH per-connection server daemon (20.229.252.112:52904).
Apr 24 23:39:29.419916 sshd[6946]: Accepted publickey for core from 20.229.252.112 port 52904 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:39:29.423721 sshd[6946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:29.432307 systemd-logind[1993]: New session 22 of user core.
Apr 24 23:39:29.437399 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 24 23:39:30.263150 sshd[6946]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:30.269902 systemd[1]: sshd@21-172.31.17.112:22-20.229.252.112:52904.service: Deactivated successfully.
Apr 24 23:39:30.275006 systemd[1]: session-22.scope: Deactivated successfully.
Apr 24 23:39:30.278621 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit.
Apr 24 23:39:30.280612 systemd-logind[1993]: Removed session 22.