Apr 21 10:05:25.266437 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 21 10:05:25.266485 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 21 08:40:46 -00 2026
Apr 21 10:05:25.266551 kernel: KASLR disabled due to lack of seed
Apr 21 10:05:25.266571 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:05:25.266588 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 21 10:05:25.266605 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:05:25.266623 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 21 10:05:25.266639 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 21 10:05:25.266656 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 21 10:05:25.266672 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 21 10:05:25.266696 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 21 10:05:25.266713 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 21 10:05:25.266729 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 21 10:05:25.266746 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 21 10:05:25.266765 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 21 10:05:25.266785 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 21 10:05:25.266804 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 21 10:05:25.266821 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 21 10:05:25.266838 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 21 10:05:25.266855 kernel: printk: bootconsole [uart0] enabled
Apr 21 10:05:25.266872 kernel: NUMA: Failed to initialise from firmware
Apr 21 10:05:25.266890 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 21 10:05:25.266907 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 21 10:05:25.266924 kernel: Zone ranges:
Apr 21 10:05:25.266941 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 21 10:05:25.266959 kernel: DMA32 empty
Apr 21 10:05:25.266981 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 21 10:05:25.266998 kernel: Movable zone start for each node
Apr 21 10:05:25.267015 kernel: Early memory node ranges
Apr 21 10:05:25.267032 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 21 10:05:25.267049 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 21 10:05:25.267065 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 21 10:05:25.267105 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 21 10:05:25.267123 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 21 10:05:25.267140 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 21 10:05:25.267157 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 21 10:05:25.267175 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 21 10:05:25.267193 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 21 10:05:25.267217 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 21 10:05:25.267234 kernel: psci: probing for conduit method from ACPI.
Apr 21 10:05:25.267259 kernel: psci: PSCIv1.0 detected in firmware.
Apr 21 10:05:25.267277 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 21 10:05:25.267296 kernel: psci: Trusted OS migration not required
Apr 21 10:05:25.267317 kernel: psci: SMC Calling Convention v1.1
Apr 21 10:05:25.267336 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 21 10:05:25.267354 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 21 10:05:25.267372 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 21 10:05:25.267390 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 21 10:05:25.267408 kernel: Detected PIPT I-cache on CPU0
Apr 21 10:05:25.267426 kernel: CPU features: detected: GIC system register CPU interface
Apr 21 10:05:25.267444 kernel: CPU features: detected: Spectre-v2
Apr 21 10:05:25.267461 kernel: CPU features: detected: Spectre-v3a
Apr 21 10:05:25.267479 kernel: CPU features: detected: Spectre-BHB
Apr 21 10:05:25.267497 kernel: CPU features: detected: ARM erratum 1742098
Apr 21 10:05:25.270835 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 21 10:05:25.270862 kernel: alternatives: applying boot alternatives
Apr 21 10:05:25.270885 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 10:05:25.270906 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:05:25.270925 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:05:25.270944 kernel: Fallback order for Node 0: 0
Apr 21 10:05:25.270963 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 21 10:05:25.270981 kernel: Policy zone: Normal
Apr 21 10:05:25.271000 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:05:25.271018 kernel: software IO TLB: area num 2.
Apr 21 10:05:25.271038 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 21 10:05:25.271064 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 21 10:05:25.271103 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:05:25.271122 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:05:25.271142 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:05:25.271161 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:05:25.271180 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:05:25.271198 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:05:25.271216 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:05:25.271234 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:05:25.271252 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 21 10:05:25.271269 kernel: GICv3: 96 SPIs implemented
Apr 21 10:05:25.271293 kernel: GICv3: 0 Extended SPIs implemented
Apr 21 10:05:25.271311 kernel: Root IRQ handler: gic_handle_irq
Apr 21 10:05:25.271330 kernel: GICv3: GICv3 features: 16 PPIs
Apr 21 10:05:25.271348 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 21 10:05:25.271367 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 21 10:05:25.271386 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 21 10:05:25.271405 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 21 10:05:25.271425 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 21 10:05:25.271443 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 21 10:05:25.271462 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 21 10:05:25.271481 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:05:25.271499 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 21 10:05:25.271581 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 21 10:05:25.271600 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 21 10:05:25.271640 kernel: Console: colour dummy device 80x25
Apr 21 10:05:25.271679 kernel: printk: console [tty1] enabled
Apr 21 10:05:25.271701 kernel: ACPI: Core revision 20230628
Apr 21 10:05:25.271721 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 21 10:05:25.271739 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:05:25.271757 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:05:25.271776 kernel: landlock: Up and running.
Apr 21 10:05:25.271800 kernel: SELinux: Initializing.
Apr 21 10:05:25.271818 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:05:25.271838 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:05:25.271856 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:05:25.271875 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:05:25.271894 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:05:25.271912 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:05:25.271930 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 21 10:05:25.271948 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 21 10:05:25.271970 kernel: Remapping and enabling EFI services.
Apr 21 10:05:25.271988 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:05:25.272007 kernel: Detected PIPT I-cache on CPU1
Apr 21 10:05:25.272026 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 21 10:05:25.272045 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 21 10:05:25.272064 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 21 10:05:25.272083 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:05:25.272102 kernel: SMP: Total of 2 processors activated.
Apr 21 10:05:25.272121 kernel: CPU features: detected: 32-bit EL0 Support
Apr 21 10:05:25.272144 kernel: CPU features: detected: 32-bit EL1 Support
Apr 21 10:05:25.272162 kernel: CPU features: detected: CRC32 instructions
Apr 21 10:05:25.272181 kernel: CPU: All CPU(s) started at EL1
Apr 21 10:05:25.272211 kernel: alternatives: applying system-wide alternatives
Apr 21 10:05:25.272235 kernel: devtmpfs: initialized
Apr 21 10:05:25.272255 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:05:25.272275 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:05:25.272294 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:05:25.272313 kernel: SMBIOS 3.0.0 present.
Apr 21 10:05:25.272336 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 21 10:05:25.272355 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:05:25.272374 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 21 10:05:25.272393 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 21 10:05:25.272412 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 21 10:05:25.272431 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:05:25.272451 kernel: audit: type=2000 audit(0.290:1): state=initialized audit_enabled=0 res=1
Apr 21 10:05:25.272471 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:05:25.272494 kernel: cpuidle: using governor menu
Apr 21 10:05:25.275257 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 21 10:05:25.275282 kernel: ASID allocator initialised with 65536 entries
Apr 21 10:05:25.275301 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:05:25.275320 kernel: Serial: AMBA PL011 UART driver
Apr 21 10:05:25.275339 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 21 10:05:25.275358 kernel: Modules: 509008 pages in range for PLT usage
Apr 21 10:05:25.275377 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:05:25.275395 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:05:25.275423 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 21 10:05:25.275442 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 21 10:05:25.275461 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:05:25.275480 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:05:25.275499 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 21 10:05:25.275538 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 21 10:05:25.275558 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:05:25.275577 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:05:25.275597 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:05:25.275623 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:05:25.275643 kernel: ACPI: Interpreter enabled
Apr 21 10:05:25.275662 kernel: ACPI: Using GIC for interrupt routing
Apr 21 10:05:25.275682 kernel: ACPI: MCFG table detected, 1 entries
Apr 21 10:05:25.275701 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 21 10:05:25.276008 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:05:25.276238 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 21 10:05:25.276453 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 21 10:05:25.276699 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 21 10:05:25.276910 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 21 10:05:25.276936 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 21 10:05:25.276955 kernel: acpiphp: Slot [1] registered
Apr 21 10:05:25.276974 kernel: acpiphp: Slot [2] registered
Apr 21 10:05:25.276993 kernel: acpiphp: Slot [3] registered
Apr 21 10:05:25.277011 kernel: acpiphp: Slot [4] registered
Apr 21 10:05:25.277030 kernel: acpiphp: Slot [5] registered
Apr 21 10:05:25.277055 kernel: acpiphp: Slot [6] registered
Apr 21 10:05:25.277074 kernel: acpiphp: Slot [7] registered
Apr 21 10:05:25.277092 kernel: acpiphp: Slot [8] registered
Apr 21 10:05:25.277111 kernel: acpiphp: Slot [9] registered
Apr 21 10:05:25.277129 kernel: acpiphp: Slot [10] registered
Apr 21 10:05:25.277148 kernel: acpiphp: Slot [11] registered
Apr 21 10:05:25.277167 kernel: acpiphp: Slot [12] registered
Apr 21 10:05:25.277185 kernel: acpiphp: Slot [13] registered
Apr 21 10:05:25.277204 kernel: acpiphp: Slot [14] registered
Apr 21 10:05:25.277222 kernel: acpiphp: Slot [15] registered
Apr 21 10:05:25.277246 kernel: acpiphp: Slot [16] registered
Apr 21 10:05:25.277265 kernel: acpiphp: Slot [17] registered
Apr 21 10:05:25.277283 kernel: acpiphp: Slot [18] registered
Apr 21 10:05:25.277302 kernel: acpiphp: Slot [19] registered
Apr 21 10:05:25.277320 kernel: acpiphp: Slot [20] registered
Apr 21 10:05:25.277339 kernel: acpiphp: Slot [21] registered
Apr 21 10:05:25.277357 kernel: acpiphp: Slot [22] registered
Apr 21 10:05:25.277376 kernel: acpiphp: Slot [23] registered
Apr 21 10:05:25.277394 kernel: acpiphp: Slot [24] registered
Apr 21 10:05:25.277418 kernel: acpiphp: Slot [25] registered
Apr 21 10:05:25.277437 kernel: acpiphp: Slot [26] registered
Apr 21 10:05:25.277455 kernel: acpiphp: Slot [27] registered
Apr 21 10:05:25.277474 kernel: acpiphp: Slot [28] registered
Apr 21 10:05:25.277492 kernel: acpiphp: Slot [29] registered
Apr 21 10:05:25.278074 kernel: acpiphp: Slot [30] registered
Apr 21 10:05:25.278097 kernel: acpiphp: Slot [31] registered
Apr 21 10:05:25.278117 kernel: PCI host bridge to bus 0000:00
Apr 21 10:05:25.278346 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 21 10:05:25.278578 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 21 10:05:25.278776 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 21 10:05:25.278966 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 21 10:05:25.279232 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 21 10:05:25.279467 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 21 10:05:25.282431 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 21 10:05:25.283611 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 21 10:05:25.283849 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 21 10:05:25.284062 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 21 10:05:25.284290 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 21 10:05:25.284501 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 21 10:05:25.284771 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 21 10:05:25.284982 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 21 10:05:25.285223 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 21 10:05:25.285436 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 21 10:05:25.287828 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 21 10:05:25.288045 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 21 10:05:25.288072 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 21 10:05:25.288092 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 21 10:05:25.288112 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 21 10:05:25.288131 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 21 10:05:25.288160 kernel: iommu: Default domain type: Translated
Apr 21 10:05:25.288180 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 21 10:05:25.288199 kernel: efivars: Registered efivars operations
Apr 21 10:05:25.288217 kernel: vgaarb: loaded
Apr 21 10:05:25.288236 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 21 10:05:25.288255 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:05:25.288274 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:05:25.288292 kernel: pnp: PnP ACPI init
Apr 21 10:05:25.288537 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 21 10:05:25.288575 kernel: pnp: PnP ACPI: found 1 devices
Apr 21 10:05:25.288595 kernel: NET: Registered PF_INET protocol family
Apr 21 10:05:25.288614 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:05:25.288634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:05:25.288653 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:05:25.288672 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:05:25.288691 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:05:25.288710 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:05:25.288733 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:05:25.288753 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:05:25.288772 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:05:25.288790 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:05:25.288809 kernel: kvm [1]: HYP mode not available
Apr 21 10:05:25.288828 kernel: Initialise system trusted keyrings
Apr 21 10:05:25.288846 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:05:25.288865 kernel: Key type asymmetric registered
Apr 21 10:05:25.288884 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:05:25.288908 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 21 10:05:25.288927 kernel: io scheduler mq-deadline registered
Apr 21 10:05:25.288945 kernel: io scheduler kyber registered
Apr 21 10:05:25.288964 kernel: io scheduler bfq registered
Apr 21 10:05:25.289204 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 21 10:05:25.289234 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 21 10:05:25.289254 kernel: ACPI: button: Power Button [PWRB]
Apr 21 10:05:25.289273 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 21 10:05:25.289300 kernel: ACPI: button: Sleep Button [SLPB]
Apr 21 10:05:25.289319 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:05:25.289339 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 21 10:05:25.293060 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 21 10:05:25.293105 kernel: printk: console [ttyS0] disabled
Apr 21 10:05:25.293128 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 21 10:05:25.293147 kernel: printk: console [ttyS0] enabled
Apr 21 10:05:25.293167 kernel: printk: bootconsole [uart0] disabled
Apr 21 10:05:25.293217 kernel: thunder_xcv, ver 1.0
Apr 21 10:05:25.293268 kernel: thunder_bgx, ver 1.0
Apr 21 10:05:25.293309 kernel: nicpf, ver 1.0
Apr 21 10:05:25.293327 kernel: nicvf, ver 1.0
Apr 21 10:05:25.293741 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 21 10:05:25.293949 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-21T10:05:24 UTC (1776765924)
Apr 21 10:05:25.293976 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 21 10:05:25.293996 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 21 10:05:25.294016 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 21 10:05:25.294042 kernel: watchdog: Hard watchdog permanently disabled
Apr 21 10:05:25.294061 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:05:25.294080 kernel: Segment Routing with IPv6
Apr 21 10:05:25.294099 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:05:25.294118 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:05:25.294137 kernel: Key type dns_resolver registered
Apr 21 10:05:25.294156 kernel: registered taskstats version 1
Apr 21 10:05:25.294175 kernel: Loading compiled-in X.509 certificates
Apr 21 10:05:25.294195 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 3383becb6d31527ac15d01269e47e8fdf1030cd4'
Apr 21 10:05:25.294213 kernel: Key type .fscrypt registered
Apr 21 10:05:25.294237 kernel: Key type fscrypt-provisioning registered
Apr 21 10:05:25.294256 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:05:25.294275 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:05:25.294294 kernel: ima: No architecture policies found
Apr 21 10:05:25.294313 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 21 10:05:25.294331 kernel: clk: Disabling unused clocks
Apr 21 10:05:25.294350 kernel: Freeing unused kernel memory: 39424K
Apr 21 10:05:25.294369 kernel: Run /init as init process
Apr 21 10:05:25.294388 kernel: with arguments:
Apr 21 10:05:25.294413 kernel: /init
Apr 21 10:05:25.294432 kernel: with environment:
Apr 21 10:05:25.294451 kernel: HOME=/
Apr 21 10:05:25.294471 kernel: TERM=linux
Apr 21 10:05:25.294496 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:05:25.296498 systemd[1]: Detected virtualization amazon.
Apr 21 10:05:25.296553 systemd[1]: Detected architecture arm64.
Apr 21 10:05:25.296602 systemd[1]: Running in initrd.
Apr 21 10:05:25.296626 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:05:25.296650 systemd[1]: Hostname set to .
Apr 21 10:05:25.296673 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:05:25.296695 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:05:25.296718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:05:25.296741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:05:25.296764 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:05:25.296793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:05:25.296815 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:05:25.296838 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:05:25.296866 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:05:25.296891 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:05:25.296913 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:05:25.296936 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:05:25.296967 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:05:25.296989 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:05:25.297010 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:05:25.297032 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:05:25.297053 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:05:25.297074 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:05:25.297095 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:05:25.297117 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:05:25.297138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:05:25.297166 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:05:25.297187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:05:25.297207 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:05:25.297229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:05:25.297250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:05:25.297273 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:05:25.297296 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:05:25.297316 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:05:25.297342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:05:25.297363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:05:25.297384 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:05:25.297405 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:05:25.297425 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:05:25.297561 systemd-journald[251]: Collecting audit messages is disabled.
Apr 21 10:05:25.297631 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:05:25.297653 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:05:25.297674 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:05:25.297699 systemd-journald[251]: Journal started
Apr 21 10:05:25.297738 systemd-journald[251]: Runtime Journal (/run/log/journal/ec20f2b86711d6cd996e6e90ef7d06ac) is 8.0M, max 75.3M, 67.3M free.
Apr 21 10:05:25.298979 kernel: Bridge firewalling registered
Apr 21 10:05:25.264220 systemd-modules-load[252]: Inserted module 'overlay'
Apr 21 10:05:25.301496 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 21 10:05:25.310579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:05:25.316007 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:05:25.316107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:05:25.326881 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:05:25.333780 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:05:25.342825 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:05:25.359764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:05:25.389189 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:05:25.402112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:05:25.409725 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:05:25.424906 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:05:25.435317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:05:25.450037 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:05:25.459825 dracut-cmdline[285]: dracut-dracut-053
Apr 21 10:05:25.467567 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 10:05:25.534647 systemd-resolved[291]: Positive Trust Anchors:
Apr 21 10:05:25.534675 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:05:25.534737 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:05:25.632548 kernel: SCSI subsystem initialized
Apr 21 10:05:25.640543 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:05:25.653566 kernel: iscsi: registered transport (tcp)
Apr 21 10:05:25.676242 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:05:25.676330 kernel: QLogic iSCSI HBA Driver
Apr 21 10:05:25.771555 kernel: random: crng init done
Apr 21 10:05:25.772044 systemd-resolved[291]: Defaulting to hostname 'linux'.
Apr 21 10:05:25.776466 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:05:25.783321 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:05:25.807654 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:05:25.820804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:05:25.858312 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:05:25.858394 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:05:25.860538 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:05:25.929584 kernel: raid6: neonx8 gen() 6638 MB/s Apr 21 10:05:25.946556 kernel: raid6: neonx4 gen() 6480 MB/s Apr 21 10:05:25.964558 kernel: raid6: neonx2 gen() 5409 MB/s Apr 21 10:05:25.981562 kernel: raid6: neonx1 gen() 3918 MB/s Apr 21 10:05:25.998552 kernel: raid6: int64x8 gen() 3758 MB/s Apr 21 10:05:26.015554 kernel: raid6: int64x4 gen() 3698 MB/s Apr 21 10:05:26.032551 kernel: raid6: int64x2 gen() 3580 MB/s Apr 21 10:05:26.050650 kernel: raid6: int64x1 gen() 2756 MB/s Apr 21 10:05:26.050728 kernel: raid6: using algorithm neonx8 gen() 6638 MB/s Apr 21 10:05:26.069756 kernel: raid6: .... xor() 4927 MB/s, rmw enabled Apr 21 10:05:26.069830 kernel: raid6: using neon recovery algorithm Apr 21 10:05:26.078912 kernel: xor: measuring software checksum speed Apr 21 10:05:26.078993 kernel: 8regs : 10974 MB/sec Apr 21 10:05:26.080164 kernel: 32regs : 11485 MB/sec Apr 21 10:05:26.080540 kernel: arm64_neon : 8757 MB/sec Apr 21 10:05:26.082606 kernel: xor: using function: 32regs (11485 MB/sec) Apr 21 10:05:26.167575 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:05:26.187797 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:05:26.199867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:05:26.247200 systemd-udevd[471]: Using default interface naming scheme 'v255'. Apr 21 10:05:26.257713 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:05:26.275054 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:05:26.308605 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Apr 21 10:05:26.370213 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 21 10:05:26.384049 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:05:26.512487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:05:26.533868 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 10:05:26.584432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:05:26.594411 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:05:26.602447 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:05:26.605308 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:05:26.626230 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:05:26.669910 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:05:26.752313 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 21 10:05:26.752395 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Apr 21 10:05:26.758417 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 21 10:05:26.758841 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 21 10:05:26.770540 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:c4:9b:b4:82:97 Apr 21 10:05:26.770632 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:05:26.770872 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:05:26.773947 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line. Apr 21 10:05:26.783385 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:05:26.793788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:05:26.794128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 21 10:05:26.799262 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:05:26.818109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:05:26.833567 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 21 10:05:26.833606 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 21 10:05:26.844691 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 21 10:05:26.859530 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:05:26.859598 kernel: GPT:9289727 != 33554431 Apr 21 10:05:26.859625 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:05:26.859650 kernel: GPT:9289727 != 33554431 Apr 21 10:05:26.859781 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:05:26.862087 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 21 10:05:26.865101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:05:26.879787 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:05:26.937017 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:05:26.959804 kernel: BTRFS: device fsid be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (536) Apr 21 10:05:26.985596 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (530) Apr 21 10:05:27.024611 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 21 10:05:27.060696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 21 10:05:27.064420 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 21 10:05:27.082981 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Apr 21 10:05:27.115624 disk-uuid[660]: Primary Header is updated. Apr 21 10:05:27.115624 disk-uuid[660]: Secondary Entries is updated. Apr 21 10:05:27.115624 disk-uuid[660]: Secondary Header is updated. Apr 21 10:05:27.141262 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 21 10:05:27.195441 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 21 10:05:28.147567 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 21 10:05:28.151585 disk-uuid[662]: The operation has completed successfully. Apr 21 10:05:28.340376 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:05:28.344629 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:05:28.396851 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:05:28.410029 sh[923]: Success Apr 21 10:05:28.437541 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 21 10:05:28.567417 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:05:28.576366 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 10:05:28.587750 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:05:28.629678 kernel: BTRFS info (device dm-0): first mount of filesystem be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 Apr 21 10:05:28.629742 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 21 10:05:28.629769 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:05:28.631235 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:05:28.632525 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:05:28.658529 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 10:05:28.674493 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Apr 21 10:05:28.679160 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:05:28.693860 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:05:28.698279 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:05:28.740909 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:05:28.740989 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 21 10:05:28.742762 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 21 10:05:28.765567 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 21 10:05:28.789157 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:05:28.791954 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:05:28.804838 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:05:28.820925 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 10:05:28.933912 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:05:28.951913 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:05:29.039986 systemd-networkd[1129]: lo: Link UP Apr 21 10:05:29.042139 systemd-networkd[1129]: lo: Gained carrier Apr 21 10:05:29.047816 systemd-networkd[1129]: Enumeration completed Apr 21 10:05:29.050492 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:05:29.055443 systemd[1]: Reached target network.target - Network. Apr 21 10:05:29.069024 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 10:05:29.074730 systemd-networkd[1129]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:05:29.080201 ignition[1070]: Ignition 2.19.0 Apr 21 10:05:29.080242 ignition[1070]: Stage: fetch-offline Apr 21 10:05:29.084634 ignition[1070]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:05:29.084660 ignition[1070]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 21 10:05:29.085233 ignition[1070]: Ignition finished successfully Apr 21 10:05:29.092760 systemd-networkd[1129]: eth0: Link UP Apr 21 10:05:29.092769 systemd-networkd[1129]: eth0: Gained carrier Apr 21 10:05:29.092787 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:05:29.098947 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:05:29.132780 systemd-networkd[1129]: eth0: DHCPv4 address 172.31.20.11/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 21 10:05:29.133282 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 21 10:05:29.163895 ignition[1137]: Ignition 2.19.0 Apr 21 10:05:29.165758 ignition[1137]: Stage: fetch Apr 21 10:05:29.167884 ignition[1137]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:05:29.167920 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 21 10:05:29.168086 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 21 10:05:29.198490 ignition[1137]: PUT result: OK Apr 21 10:05:29.204131 ignition[1137]: parsed url from cmdline: "" Apr 21 10:05:29.204169 ignition[1137]: no config URL provided Apr 21 10:05:29.204191 ignition[1137]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:05:29.204224 ignition[1137]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:05:29.204273 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 21 10:05:29.214675 ignition[1137]: PUT result: OK Apr 21 10:05:29.214807 ignition[1137]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 21 10:05:29.219785 ignition[1137]: GET result: OK Apr 21 10:05:29.219955 ignition[1137]: parsing config with SHA512: 5a12a0403da533477e1f13cdc2292b64cdad3d0e09c4fd4b3630c333f46f5bfd0dc802eba78a3b1875778c8ab684c6d56417b4d6dc0d17c1892e8046d5ba1dad Apr 21 10:05:29.231213 unknown[1137]: fetched base config from "system" Apr 21 10:05:29.235478 unknown[1137]: fetched base config from "system" Apr 21 10:05:29.237713 unknown[1137]: fetched user config from "aws" Apr 21 10:05:29.240555 ignition[1137]: fetch: fetch complete Apr 21 10:05:29.242541 ignition[1137]: fetch: fetch passed Apr 21 10:05:29.244404 ignition[1137]: Ignition finished successfully Apr 21 10:05:29.250626 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 21 10:05:29.264018 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 21 10:05:29.295181 ignition[1145]: Ignition 2.19.0 Apr 21 10:05:29.295212 ignition[1145]: Stage: kargs Apr 21 10:05:29.297306 ignition[1145]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:05:29.297338 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 21 10:05:29.298761 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 21 10:05:29.304448 ignition[1145]: PUT result: OK Apr 21 10:05:29.312822 ignition[1145]: kargs: kargs passed Apr 21 10:05:29.314664 ignition[1145]: Ignition finished successfully Apr 21 10:05:29.319159 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:05:29.330828 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 10:05:29.360426 ignition[1152]: Ignition 2.19.0 Apr 21 10:05:29.360450 ignition[1152]: Stage: disks Apr 21 10:05:29.361174 ignition[1152]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:05:29.361202 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 21 10:05:29.361376 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 21 10:05:29.373947 ignition[1152]: PUT result: OK Apr 21 10:05:29.379392 ignition[1152]: disks: disks passed Apr 21 10:05:29.379810 ignition[1152]: Ignition finished successfully Apr 21 10:05:29.386668 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:05:29.392495 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:05:29.396414 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:05:29.397243 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:05:29.398074 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:05:29.398526 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:05:29.420934 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 21 10:05:29.467266 systemd-fsck[1161]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:05:29.475161 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:05:29.496377 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:05:29.594544 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 97544627-6598-4a50-85bf-78c13463f4bd r/w with ordered data mode. Quota mode: none. Apr 21 10:05:29.595784 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:05:29.603959 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:05:29.620725 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:05:29.633792 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 10:05:29.639073 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 21 10:05:29.642449 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:05:29.642541 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:05:29.664548 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1180) Apr 21 10:05:29.670723 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:05:29.670812 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 21 10:05:29.670842 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 21 10:05:29.674039 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:05:29.688942 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:05:29.701103 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 21 10:05:29.703235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:05:29.798759 initrd-setup-root[1205]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:05:29.811803 initrd-setup-root[1212]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:05:29.826664 initrd-setup-root[1219]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:05:29.840255 initrd-setup-root[1226]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:05:30.006415 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:05:30.021701 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:05:30.027791 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 10:05:30.051607 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:05:30.057579 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:05:30.089780 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 10:05:30.106148 ignition[1294]: INFO : Ignition 2.19.0 Apr 21 10:05:30.106148 ignition[1294]: INFO : Stage: mount Apr 21 10:05:30.109985 ignition[1294]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:05:30.109985 ignition[1294]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 21 10:05:30.109985 ignition[1294]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 21 10:05:30.118287 ignition[1294]: INFO : PUT result: OK Apr 21 10:05:30.127158 ignition[1294]: INFO : mount: mount passed Apr 21 10:05:30.129690 ignition[1294]: INFO : Ignition finished successfully Apr 21 10:05:30.132665 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:05:30.144947 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:05:30.168963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 21 10:05:30.196590 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1306) Apr 21 10:05:30.200642 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:05:30.200694 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 21 10:05:30.200721 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 21 10:05:30.207555 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 21 10:05:30.211895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:05:30.248532 ignition[1323]: INFO : Ignition 2.19.0 Apr 21 10:05:30.248532 ignition[1323]: INFO : Stage: files Apr 21 10:05:30.255270 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:05:30.255270 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 21 10:05:30.255270 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 21 10:05:30.264005 ignition[1323]: INFO : PUT result: OK Apr 21 10:05:30.269897 ignition[1323]: DEBUG : files: compiled without relabeling support, skipping Apr 21 10:05:30.273279 ignition[1323]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 10:05:30.273279 ignition[1323]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 10:05:30.285915 ignition[1323]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 10:05:30.290851 ignition[1323]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 10:05:30.294881 unknown[1323]: wrote ssh authorized keys file for user: core Apr 21 10:05:30.297683 ignition[1323]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 10:05:30.302026 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 21 10:05:30.302026 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 21 10:05:30.370636 systemd-networkd[1129]: eth0: Gained IPv6LL Apr 21 10:05:30.395731 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 21 10:05:30.725742 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 21 10:05:30.730699 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 21 10:05:30.735738 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 21 10:05:30.735738 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 21 10:05:30.746743 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Apr 21 10:05:31.115933 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 21 10:05:31.565140 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 21 10:05:31.565140 ignition[1323]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 21 10:05:31.574696 ignition[1323]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:05:31.574696 ignition[1323]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:05:31.574696 ignition[1323]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 21 10:05:31.574696 ignition[1323]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 21 10:05:31.574696 ignition[1323]: INFO : files: op(d): [finished] setting preset to enabled for 
"prepare-helm.service" Apr 21 10:05:31.574696 ignition[1323]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:05:31.574696 ignition[1323]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:05:31.574696 ignition[1323]: INFO : files: files passed Apr 21 10:05:31.574696 ignition[1323]: INFO : Ignition finished successfully Apr 21 10:05:31.611646 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 10:05:31.625816 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 10:05:31.634950 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 21 10:05:31.649379 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 21 10:05:31.653645 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 21 10:05:31.689668 initrd-setup-root-after-ignition[1351]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:05:31.689668 initrd-setup-root-after-ignition[1351]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:05:31.698938 initrd-setup-root-after-ignition[1355]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:05:31.707623 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:05:31.715705 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 10:05:31.728803 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 10:05:31.787766 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 10:05:31.790015 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 10:05:31.796706 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Apr 21 10:05:31.799262 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 10:05:31.802001 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 10:05:31.815890 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 21 10:05:31.862454 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:05:31.879858 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 10:05:31.911478 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:05:31.914649 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:05:31.918746 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 10:05:31.927652 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 21 10:05:31.930243 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:05:31.936767 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 10:05:31.942062 systemd[1]: Stopped target basic.target - Basic System. Apr 21 10:05:31.946081 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 10:05:31.954976 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:05:31.961483 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 21 10:05:31.970932 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 10:05:31.973762 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:05:31.981808 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 21 10:05:31.986136 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 10:05:31.991327 systemd[1]: Stopped target swap.target - Swaps. 
Apr 21 10:05:31.994637 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 21 10:05:31.994894 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:05:32.004617 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:05:32.008333 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:05:32.015635 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 10:05:32.019200 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:05:32.023024 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 10:05:32.023426 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 21 10:05:32.034901 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 21 10:05:32.035417 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:05:32.043871 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 10:05:32.044315 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 10:05:32.059880 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 21 10:05:32.062120 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 21 10:05:32.062387 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:05:32.068295 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 21 10:05:32.079623 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 21 10:05:32.082425 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:05:32.089721 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 21 10:05:32.092179 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:05:32.108449 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 21 10:05:32.110926 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 21 10:05:32.138669 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 10:05:32.143926 ignition[1375]: INFO : Ignition 2.19.0 Apr 21 10:05:32.143926 ignition[1375]: INFO : Stage: umount Apr 21 10:05:32.147805 ignition[1375]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:05:32.147805 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 21 10:05:32.152779 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 21 10:05:32.157401 ignition[1375]: INFO : PUT result: OK Apr 21 10:05:32.162287 ignition[1375]: INFO : umount: umount passed Apr 21 10:05:32.166524 ignition[1375]: INFO : Ignition finished successfully Apr 21 10:05:32.168218 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 10:05:32.168948 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 21 10:05:32.175656 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 21 10:05:32.175757 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 10:05:32.179916 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 10:05:32.180008 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 10:05:32.183601 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 21 10:05:32.183690 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 21 10:05:32.189473 systemd[1]: Stopped target network.target - Network. Apr 21 10:05:32.191568 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 10:05:32.191680 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:05:32.194579 systemd[1]: Stopped target paths.target - Path Units. Apr 21 10:05:32.198734 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 21 10:05:32.203085 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:05:32.208684 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:05:32.210723 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:05:32.213687 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:05:32.213792 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:05:32.216954 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:05:32.217032 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:05:32.226666 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:05:32.226792 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:05:32.229783 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:05:32.229875 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:05:32.236297 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:05:32.241555 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:05:32.251580 systemd-networkd[1129]: eth0: DHCPv6 lease lost
Apr 21 10:05:32.260033 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:05:32.260392 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:05:32.277601 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:05:32.277854 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:05:32.285729 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:05:32.285849 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:05:32.319456 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:05:32.324275 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:05:32.324403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:05:32.327882 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:05:32.328001 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:05:32.334091 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:05:32.334216 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:05:32.355352 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:05:32.355476 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:05:32.358938 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:05:32.391406 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:05:32.395245 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:05:32.399373 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:05:32.399491 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:05:32.409400 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:05:32.409778 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:05:32.423595 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:05:32.423757 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:05:32.426780 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:05:32.426872 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:05:32.429666 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:05:32.432132 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:05:32.441441 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:05:32.441702 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:05:32.447211 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:05:32.447350 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:05:32.472923 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:05:32.475601 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:05:32.475761 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:05:32.478936 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:05:32.479070 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:05:32.483248 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:05:32.483486 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:05:32.526178 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:05:32.526775 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:05:32.535274 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:05:32.550118 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:05:32.571828 systemd[1]: Switching root.
Apr 21 10:05:32.612680 systemd-journald[251]: Journal stopped
Apr 21 10:05:34.582980 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:05:34.583131 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:05:34.583177 kernel: SELinux: policy capability open_perms=1
Apr 21 10:05:34.583211 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:05:34.583242 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:05:34.583271 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:05:34.583308 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:05:34.583337 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:05:34.583365 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:05:34.583396 kernel: audit: type=1403 audit(1776765932.872:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:05:34.583432 systemd[1]: Successfully loaded SELinux policy in 54.197ms.
Apr 21 10:05:34.583479 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.259ms.
Apr 21 10:05:34.583536 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:05:34.583583 systemd[1]: Detected virtualization amazon.
Apr 21 10:05:34.583617 systemd[1]: Detected architecture arm64.
Apr 21 10:05:34.583653 systemd[1]: Detected first boot.
Apr 21 10:05:34.583685 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:05:34.583719 zram_generator::config[1418]: No configuration found.
Apr 21 10:05:34.583756 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:05:34.583785 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:05:34.583818 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:05:34.583849 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:05:34.583880 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:05:34.583914 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:05:34.583946 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:05:34.583978 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:05:34.584041 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:05:34.584089 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:05:34.584125 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:05:34.584188 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:05:34.584224 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:05:34.584256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:05:34.584293 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:05:34.584325 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:05:34.584356 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:05:34.584388 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:05:34.584420 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:05:34.584453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:05:34.584485 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 10:05:34.584535 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:05:34.584569 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:05:34.584609 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:05:34.584642 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:05:34.584674 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:05:34.584712 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:05:34.584746 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:05:34.584778 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:05:34.584811 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:05:34.584844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:05:34.584880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:05:34.584912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:05:34.584943 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:05:34.584973 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:05:34.585003 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:05:34.585033 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:05:34.585066 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:05:34.585099 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:05:34.585131 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:05:34.585175 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:05:34.591128 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:05:34.591200 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:05:34.591233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:05:34.591268 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:05:34.591298 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:05:34.591330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:05:34.591361 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:05:34.591400 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:05:34.591432 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:05:34.591464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:05:34.591498 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:05:34.591570 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:05:34.591601 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:05:34.591632 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:05:34.591663 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:05:34.591701 kernel: fuse: init (API version 7.39)
Apr 21 10:05:34.591732 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:05:34.591780 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:05:34.591817 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:05:34.591847 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:05:34.591881 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:05:34.591913 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:05:34.591943 systemd[1]: Stopped verity-setup.service.
Apr 21 10:05:34.591977 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:05:34.592010 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:05:34.592051 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:05:34.592081 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:05:34.592123 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:05:34.592153 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:05:34.592238 systemd-journald[1496]: Collecting audit messages is disabled.
Apr 21 10:05:34.592292 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:05:34.592324 systemd-journald[1496]: Journal started
Apr 21 10:05:34.592374 systemd-journald[1496]: Runtime Journal (/run/log/journal/ec20f2b86711d6cd996e6e90ef7d06ac) is 8.0M, max 75.3M, 67.3M free.
Apr 21 10:05:34.600257 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:05:34.006752 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:05:34.039915 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 21 10:05:34.040812 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:05:34.609823 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:05:34.615614 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:05:34.615468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:05:34.616881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:05:34.620179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:05:34.620462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:05:34.625408 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:05:34.625832 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:05:34.629213 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:05:34.650670 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:05:34.670895 kernel: loop: module loaded
Apr 21 10:05:34.668033 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:05:34.669576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:05:34.691697 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:05:34.696461 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:05:34.707706 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:05:34.714533 kernel: ACPI: bus type drm_connector registered
Apr 21 10:05:34.718642 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:05:34.721108 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:05:34.721164 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:05:34.725356 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:05:34.736355 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:05:34.748843 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:05:34.753309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:05:34.761912 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:05:34.769867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:05:34.772774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:05:34.781399 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:05:34.783964 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:05:34.787524 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:05:34.796879 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:05:34.806201 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:05:34.808696 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:05:34.811429 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:05:34.818246 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:05:34.821669 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:05:34.872625 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:05:34.876130 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:05:34.887077 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:05:34.898688 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:05:34.911752 kernel: loop0: detected capacity change from 0 to 114328
Apr 21 10:05:34.914831 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:05:34.930399 systemd-journald[1496]: Time spent on flushing to /var/log/journal/ec20f2b86711d6cd996e6e90ef7d06ac is 61.172ms for 900 entries.
Apr 21 10:05:34.930399 systemd-journald[1496]: System Journal (/var/log/journal/ec20f2b86711d6cd996e6e90ef7d06ac) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:05:35.026915 systemd-journald[1496]: Received client request to flush runtime journal.
Apr 21 10:05:35.027058 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:05:34.985370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:05:34.994631 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:05:35.039053 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:05:35.046274 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:05:35.059608 kernel: loop1: detected capacity change from 0 to 52536
Apr 21 10:05:35.132630 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:05:35.147846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:05:35.274111 kernel: loop2: detected capacity change from 0 to 114432
Apr 21 10:05:35.295881 systemd-tmpfiles[1564]: ACLs are not supported, ignoring.
Apr 21 10:05:35.295923 systemd-tmpfiles[1564]: ACLs are not supported, ignoring.
Apr 21 10:05:35.320577 kernel: loop3: detected capacity change from 0 to 197488
Apr 21 10:05:35.322624 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:05:35.329749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:05:35.348149 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:05:35.390566 udevadm[1571]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 10:05:35.698926 kernel: loop4: detected capacity change from 0 to 114328
Apr 21 10:05:35.737892 kernel: loop5: detected capacity change from 0 to 52536
Apr 21 10:05:35.765928 kernel: loop6: detected capacity change from 0 to 114432
Apr 21 10:05:35.781038 ldconfig[1537]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:05:35.790989 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:05:35.794571 kernel: loop7: detected capacity change from 0 to 197488
Apr 21 10:05:35.819089 (sd-merge)[1573]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 21 10:05:35.820339 (sd-merge)[1573]: Merged extensions into '/usr'.
Apr 21 10:05:35.829181 systemd[1]: Reloading requested from client PID 1545 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:05:35.829224 systemd[1]: Reloading...
Apr 21 10:05:35.941549 zram_generator::config[1599]: No configuration found.
Apr 21 10:05:36.285455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:05:36.403010 systemd[1]: Reloading finished in 572 ms.
Apr 21 10:05:36.448585 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:05:36.453571 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:05:36.466991 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:05:36.478053 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:05:36.485913 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:05:36.510804 systemd[1]: Reloading requested from client PID 1651 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:05:36.510845 systemd[1]: Reloading...
Apr 21 10:05:36.545607 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:05:36.546305 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:05:36.550013 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:05:36.551420 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Apr 21 10:05:36.553742 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Apr 21 10:05:36.565968 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:05:36.565993 systemd-tmpfiles[1652]: Skipping /boot
Apr 21 10:05:36.606074 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:05:36.606251 systemd-tmpfiles[1652]: Skipping /boot
Apr 21 10:05:36.612100 systemd-udevd[1653]: Using default interface naming scheme 'v255'.
Apr 21 10:05:36.701579 zram_generator::config[1679]: No configuration found.
Apr 21 10:05:36.905871 (udev-worker)[1686]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:05:37.064852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:05:37.205349 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:05:37.206748 systemd[1]: Reloading finished in 695 ms.
Apr 21 10:05:37.261677 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:05:37.266613 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:05:37.296044 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1688)
Apr 21 10:05:37.389412 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:05:37.398761 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:05:37.401792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:05:37.408211 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:05:37.416042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:05:37.422280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:05:37.425290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:05:37.432054 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:05:37.443253 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:05:37.461950 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:05:37.468899 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:05:37.479075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:05:37.497456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:05:37.500992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:05:37.506101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:05:37.509613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:05:37.520104 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:05:37.525951 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:05:37.526264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:05:37.541131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:05:37.544707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:05:37.551098 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:05:37.556739 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:05:37.559362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:05:37.559786 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:05:37.577230 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:05:37.620847 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:05:37.659189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:05:37.661664 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:05:37.665011 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:05:37.732073 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:05:37.742836 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:05:37.764719 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:05:37.772868 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:05:37.795252 augenrules[1884]: No rules
Apr 21 10:05:37.796130 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:05:37.798057 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:05:37.807765 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:05:37.812903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:05:37.813247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:05:37.831978 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:05:37.833603 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:05:37.836817 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:05:37.837608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:05:37.861369 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:05:37.880252 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:05:37.898354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:05:37.922253 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:05:37.935236 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:05:37.950764 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:05:37.954297 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:05:37.992527 lvm[1902]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:05:38.038649 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:05:38.042037 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:05:38.058811 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:05:38.091552 lvm[1910]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:05:38.122712 systemd-networkd[1831]: lo: Link UP
Apr 21 10:05:38.122738 systemd-networkd[1831]: lo: Gained carrier
Apr 21 10:05:38.131161 systemd-networkd[1831]: Enumeration completed
Apr 21 10:05:38.131370 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:05:38.134173 systemd-networkd[1831]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:05:38.134182 systemd-networkd[1831]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:05:38.141762 systemd-resolved[1835]: Positive Trust Anchors:
Apr 21 10:05:38.142090 systemd-resolved[1835]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:05:38.142156 systemd-resolved[1835]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:05:38.147970 systemd-networkd[1831]: eth0: Link UP
Apr 21 10:05:38.148360 systemd-networkd[1831]: eth0: Gained carrier
Apr 21 10:05:38.148397 systemd-networkd[1831]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:05:38.148866 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:05:38.161219 systemd-resolved[1835]: Defaulting to hostname 'linux'.
Apr 21 10:05:38.162603 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:05:38.166832 systemd-networkd[1831]: eth0: DHCPv4 address 172.31.20.11/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:05:38.167761 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:05:38.170619 systemd[1]: Reached target network.target - Network.
Apr 21 10:05:38.172731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:05:38.175733 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:05:38.179299 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:05:38.182787 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 10:05:38.186021 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 10:05:38.188955 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 10:05:38.191913 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 21 10:05:38.194979 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 10:05:38.195083 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:05:38.197265 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:05:38.201421 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 10:05:38.207149 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 10:05:38.221879 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 10:05:38.225621 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 10:05:38.228276 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:05:38.230546 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:05:38.232732 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:05:38.232793 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:05:38.242702 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 10:05:38.249961 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 21 10:05:38.259864 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 10:05:38.278921 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 21 10:05:38.291854 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 10:05:38.299573 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 10:05:38.312034 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 10:05:38.327496 jq[1919]: false Apr 21 10:05:38.328878 systemd[1]: Started ntpd.service - Network Time Service. Apr 21 10:05:38.341289 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 21 10:05:38.348548 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 21 10:05:38.360115 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 21 10:05:38.370074 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 21 10:05:38.381933 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 21 10:05:38.393395 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 21 10:05:38.394364 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 21 10:05:38.397065 systemd[1]: Starting update-engine.service - Update Engine... Apr 21 10:05:38.405097 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 21 10:05:38.414183 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 21 10:05:38.418723 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 21 10:05:38.479369 systemd[1]: motdgen.service: Deactivated successfully. Apr 21 10:05:38.479912 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found loop4
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found loop5
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found loop6
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found loop7
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found nvme0n1
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found nvme0n1p1
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found nvme0n1p2
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found nvme0n1p3
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found usr
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found nvme0n1p4
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found nvme0n1p6
Apr 21 10:05:38.509559 extend-filesystems[1920]: Found nvme0n1p7
Apr 21 10:05:38.555667 jq[1932]: true
Apr 21 10:05:38.526142 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:05:38.573415 extend-filesystems[1920]: Found nvme0n1p9
Apr 21 10:05:38.573415 extend-filesystems[1920]: Checking size of /dev/nvme0n1p9
Apr 21 10:05:38.521423 dbus-daemon[1918]: [system] SELinux support is enabled
Apr 21 10:05:38.578253 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:05:38.620776 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:09:52 UTC 2026 (1): Starting
Apr 21 10:05:38.620776 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:05:38.620776 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: ----------------------------------------------------
Apr 21 10:05:38.620776 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:05:38.620776 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:05:38.620776 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: corporation. Support and training for ntp-4 are
Apr 21 10:05:38.620776 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: available at https://www.nwtime.org/support
Apr 21 10:05:38.620776 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: ----------------------------------------------------
Apr 21 10:05:38.525525 dbus-daemon[1918]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1831 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 21 10:05:38.578695 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:05:38.605187 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:09:52 UTC 2026 (1): Starting
Apr 21 10:05:38.596844 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:05:38.605246 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:05:38.596930 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:05:38.605268 ntpd[1924]: ----------------------------------------------------
Apr 21 10:05:38.600461 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:05:38.605289 ntpd[1924]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:05:38.600536 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:05:38.605310 ntpd[1924]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:05:38.628908 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 21 10:05:38.605331 ntpd[1924]: corporation. Support and training for ntp-4 are
Apr 21 10:05:38.649334 (ntainerd)[1955]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:05:38.605351 ntpd[1924]: available at https://www.nwtime.org/support
Apr 21 10:05:38.666920 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: proto: precision = 0.096 usec (-23)
Apr 21 10:05:38.666920 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: basedate set to 2026-04-09
Apr 21 10:05:38.666920 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:05:38.605370 ntpd[1924]: ----------------------------------------------------
Apr 21 10:05:38.619718 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 10:05:38.652783 ntpd[1924]: proto: precision = 0.096 usec (-23)
Apr 21 10:05:38.653253 ntpd[1924]: basedate set to 2026-04-09
Apr 21 10:05:38.653285 ntpd[1924]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:05:38.670059 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:05:38.677440 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:05:38.677440 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:05:38.677079 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:05:38.681246 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:05:38.687758 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:05:38.687758 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: Listen normally on 3 eth0 172.31.20.11:123
Apr 21 10:05:38.687758 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: Listen normally on 4 lo [::1]:123
Apr 21 10:05:38.687758 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: bind(21) AF_INET6 fe80::4c4:9bff:feb4:8297%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:05:38.687758 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: unable to create socket on eth0 (5) for fe80::4c4:9bff:feb4:8297%2#123
Apr 21 10:05:38.687758 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: failed to init interface for address fe80::4c4:9bff:feb4:8297%2
Apr 21 10:05:38.687758 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: Listening on routing socket on fd #21 for interface updates
Apr 21 10:05:38.681338 ntpd[1924]: Listen normally on 3 eth0 172.31.20.11:123
Apr 21 10:05:38.681409 ntpd[1924]: Listen normally on 4 lo [::1]:123
Apr 21 10:05:38.681534 ntpd[1924]: bind(21) AF_INET6 fe80::4c4:9bff:feb4:8297%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:05:38.681582 ntpd[1924]: unable to create socket on eth0 (5) for fe80::4c4:9bff:feb4:8297%2#123
Apr 21 10:05:38.681614 ntpd[1924]: failed to init interface for address fe80::4c4:9bff:feb4:8297%2
Apr 21 10:05:38.681681 ntpd[1924]: Listening on routing socket on fd #21 for interface updates
Apr 21 10:05:38.707826 extend-filesystems[1920]: Resized partition /dev/nvme0n1p9
Apr 21 10:05:38.726550 tar[1938]: linux-arm64/LICENSE
Apr 21 10:05:38.726550 tar[1938]: linux-arm64/helm
Apr 21 10:05:38.741543 extend-filesystems[1969]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:05:38.757345 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:05:38.779979 jq[1952]: true
Apr 21 10:05:38.807327 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 21 10:05:38.807380 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:05:38.807380 ntpd[1924]: 21 Apr 10:05:38 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:05:38.757407 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:05:38.804625 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 21 10:05:38.824734 update_engine[1931]: I20260421 10:05:38.815667 1931 main.cc:92] Flatcar Update Engine starting
Apr 21 10:05:38.824734 update_engine[1931]: I20260421 10:05:38.822185 1931 update_check_scheduler.cc:74] Next update check in 3m55s
Apr 21 10:05:38.822068 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:05:38.842174 coreos-metadata[1917]: Apr 21 10:05:38.837 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:05:38.842174 coreos-metadata[1917]: Apr 21 10:05:38.840 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 21 10:05:38.842174 coreos-metadata[1917]: Apr 21 10:05:38.841 INFO Fetch successful
Apr 21 10:05:38.842174 coreos-metadata[1917]: Apr 21 10:05:38.841 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.842 INFO Fetch successful
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.842 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.843 INFO Fetch successful
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.843 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.844 INFO Fetch successful
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.844 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.845 INFO Fetch failed with 404: resource not found
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.845 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.846 INFO Fetch successful
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.846 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.847 INFO Fetch successful
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.847 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.848 INFO Fetch successful
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.848 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.848 INFO Fetch successful
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.848 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 21 10:05:38.860053 coreos-metadata[1917]: Apr 21 10:05:38.849 INFO Fetch successful
Apr 21 10:05:38.857294 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:05:38.910569 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 21 10:05:38.939851 extend-filesystems[1969]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 21 10:05:38.939851 extend-filesystems[1969]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 21 10:05:38.939851 extend-filesystems[1969]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 21 10:05:38.949804 extend-filesystems[1920]: Resized filesystem in /dev/nvme0n1p9
Apr 21 10:05:38.956323 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:05:38.957964 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:05:38.991653 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:05:38.994814 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:05:39.055543 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1684)
Apr 21 10:05:39.093291 bash[2005]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:05:39.105344 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:05:39.138845 systemd[1]: Starting sshkeys.service...
Apr 21 10:05:39.157694 systemd-logind[1930]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 21 10:05:39.157765 systemd-logind[1930]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 21 10:05:39.165930 systemd-logind[1930]: New seat seat0.
Apr 21 10:05:39.181592 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:05:39.220704 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:05:39.235424 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:05:39.393973 systemd-networkd[1831]: eth0: Gained IPv6LL
Apr 21 10:05:39.419366 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:05:39.432885 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:05:39.449866 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 21 10:05:39.460058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:05:39.466357 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:05:39.476057 coreos-metadata[2044]: Apr 21 10:05:39.473 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:05:39.492775 coreos-metadata[2044]: Apr 21 10:05:39.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 21 10:05:39.496555 coreos-metadata[2044]: Apr 21 10:05:39.493 INFO Fetch successful
Apr 21 10:05:39.496555 coreos-metadata[2044]: Apr 21 10:05:39.493 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 21 10:05:39.503275 coreos-metadata[2044]: Apr 21 10:05:39.502 INFO Fetch successful
Apr 21 10:05:39.522098 unknown[2044]: wrote ssh authorized keys file for user: core
Apr 21 10:05:39.622199 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 21 10:05:39.630979 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 21 10:05:39.635432 dbus-daemon[1918]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1958 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 21 10:05:39.686172 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 21 10:05:39.746702 update-ssh-keys[2088]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:05:39.753331 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:05:39.771639 systemd[1]: Finished sshkeys.service.
Apr 21 10:05:39.787631 amazon-ssm-agent[2071]: Initializing new seelog logger
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: New Seelog Logger Creation Complete
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: 2026/04/21 10:05:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: 2026/04/21 10:05:39 processing appconfig overrides
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: 2026/04/21 10:05:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: 2026/04/21 10:05:39 processing appconfig overrides
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: 2026/04/21 10:05:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: 2026/04/21 10:05:39 processing appconfig overrides
Apr 21 10:05:39.812915 amazon-ssm-agent[2071]: 2026-04-21 10:05:39 INFO Proxy environment variables:
Apr 21 10:05:39.813177 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:05:39.823371 amazon-ssm-agent[2071]: 2026/04/21 10:05:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:05:39.823371 amazon-ssm-agent[2071]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:05:39.823371 amazon-ssm-agent[2071]: 2026/04/21 10:05:39 processing appconfig overrides
Apr 21 10:05:39.893496 polkitd[2100]: Started polkitd version 121
Apr 21 10:05:39.907420 amazon-ssm-agent[2071]: 2026-04-21 10:05:39 INFO https_proxy:
Apr 21 10:05:39.926494 locksmithd[1974]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:05:39.955602 polkitd[2100]: Loading rules from directory /etc/polkit-1/rules.d
Apr 21 10:05:39.955770 polkitd[2100]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 21 10:05:39.959253 polkitd[2100]: Finished loading, compiling and executing 2 rules
Apr 21 10:05:39.961787 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 21 10:05:39.962123 systemd[1]: Started polkit.service - Authorization Manager.
Apr 21 10:05:39.966641 polkitd[2100]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 21 10:05:40.019108 amazon-ssm-agent[2071]: 2026-04-21 10:05:39 INFO http_proxy:
Apr 21 10:05:40.032965 containerd[1955]: time="2026-04-21T10:05:40.030093225Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:05:40.068142 systemd-hostnamed[1958]: Hostname set to (transient)
Apr 21 10:05:40.069328 systemd-resolved[1835]: System hostname changed to 'ip-172-31-20-11'.
Apr 21 10:05:40.123555 amazon-ssm-agent[2071]: 2026-04-21 10:05:39 INFO no_proxy:
Apr 21 10:05:40.162172 containerd[1955]: time="2026-04-21T10:05:40.162029541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:05:40.165419 containerd[1955]: time="2026-04-21T10:05:40.165315513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:05:40.165419 containerd[1955]: time="2026-04-21T10:05:40.165400425Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:05:40.165658 containerd[1955]: time="2026-04-21T10:05:40.165442737Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:05:40.165964 containerd[1955]: time="2026-04-21T10:05:40.165886449Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:05:40.166060 containerd[1955]: time="2026-04-21T10:05:40.165961857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:05:40.167086 containerd[1955]: time="2026-04-21T10:05:40.166137897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:05:40.167086 containerd[1955]: time="2026-04-21T10:05:40.166192017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:05:40.167086 containerd[1955]: time="2026-04-21T10:05:40.166661589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:05:40.167086 containerd[1955]: time="2026-04-21T10:05:40.166712265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:05:40.167086 containerd[1955]: time="2026-04-21T10:05:40.166749873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:05:40.167086 containerd[1955]: time="2026-04-21T10:05:40.166787493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:05:40.167086 containerd[1955]: time="2026-04-21T10:05:40.167056257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:05:40.167963 containerd[1955]: time="2026-04-21T10:05:40.167696337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:05:40.168119 containerd[1955]: time="2026-04-21T10:05:40.168041973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:05:40.168119 containerd[1955]: time="2026-04-21T10:05:40.168094449Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:05:40.170017 containerd[1955]: time="2026-04-21T10:05:40.168379389Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:05:40.170017 containerd[1955]: time="2026-04-21T10:05:40.168634269Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:05:40.176923 containerd[1955]: time="2026-04-21T10:05:40.176835777Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:05:40.177115 containerd[1955]: time="2026-04-21T10:05:40.176975229Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:05:40.177178 containerd[1955]: time="2026-04-21T10:05:40.177133245Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:05:40.177228 containerd[1955]: time="2026-04-21T10:05:40.177182229Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:05:40.177279 containerd[1955]: time="2026-04-21T10:05:40.177218241Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:05:40.178047 containerd[1955]: time="2026-04-21T10:05:40.177565665Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.181100793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.183908145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.183960657Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184010193Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184047453Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184081881Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184114401Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184148517Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184184877Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184220361Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184260981Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184292157Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184337133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.184545 containerd[1955]: time="2026-04-21T10:05:40.184386393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.184431717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.184466697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.184497441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.185166405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.185211729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.185245509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.185277273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.185314389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.185344797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.185378385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185428 containerd[1955]: time="2026-04-21T10:05:40.185409057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185994 containerd[1955]: time="2026-04-21T10:05:40.185446521Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:05:40.185994 containerd[1955]: time="2026-04-21T10:05:40.185548293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185994 containerd[1955]: time="2026-04-21T10:05:40.185587665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.185994 containerd[1955]: time="2026-04-21T10:05:40.185616333Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:05:40.185994 containerd[1955]: time="2026-04-21T10:05:40.185915793Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:05:40.188602 containerd[1955]: time="2026-04-21T10:05:40.186547449Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:05:40.188602 containerd[1955]: time="2026-04-21T10:05:40.186605805Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:05:40.188602 containerd[1955]: time="2026-04-21T10:05:40.186643521Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:05:40.188602 containerd[1955]: time="2026-04-21T10:05:40.186670065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.188602 containerd[1955]: time="2026-04-21T10:05:40.186701397Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:05:40.188602 containerd[1955]: time="2026-04-21T10:05:40.186725877Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:05:40.188602 containerd[1955]: time="2026-04-21T10:05:40.186783333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:05:40.190595 containerd[1955]: time="2026-04-21T10:05:40.187492305Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:05:40.190595 containerd[1955]: time="2026-04-21T10:05:40.189558285Z" level=info msg="Connect containerd service" Apr 21 10:05:40.190595 containerd[1955]: time="2026-04-21T10:05:40.189652833Z" level=info msg="using legacy CRI server" Apr 21 10:05:40.190595 containerd[1955]: time="2026-04-21T10:05:40.189673185Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:05:40.190595 containerd[1955]: time="2026-04-21T10:05:40.190378809Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:05:40.196840 containerd[1955]: time="2026-04-21T10:05:40.194538357Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:05:40.197864 containerd[1955]: time="2026-04-21T10:05:40.197756673Z" level=info msg="Start subscribing containerd event" Apr 21 10:05:40.197988 containerd[1955]: time="2026-04-21T10:05:40.197884557Z" level=info msg="Start recovering state" Apr 21 10:05:40.200551 containerd[1955]: time="2026-04-21T10:05:40.198044577Z" level=info msg="Start event monitor" Apr 21 10:05:40.200551 containerd[1955]: time="2026-04-21T10:05:40.198092193Z" level=info msg="Start 
snapshots syncer" Apr 21 10:05:40.200551 containerd[1955]: time="2026-04-21T10:05:40.198118809Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:05:40.200551 containerd[1955]: time="2026-04-21T10:05:40.198149985Z" level=info msg="Start streaming server" Apr 21 10:05:40.200859 containerd[1955]: time="2026-04-21T10:05:40.200803209Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:05:40.200985 containerd[1955]: time="2026-04-21T10:05:40.200925417Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:05:40.202793 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:05:40.206829 containerd[1955]: time="2026-04-21T10:05:40.206760550Z" level=info msg="containerd successfully booted in 0.195681s" Apr 21 10:05:40.219905 amazon-ssm-agent[2071]: 2026-04-21 10:05:39 INFO Checking if agent identity type OnPrem can be assumed Apr 21 10:05:40.318873 amazon-ssm-agent[2071]: 2026-04-21 10:05:39 INFO Checking if agent identity type EC2 can be assumed Apr 21 10:05:40.418881 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO Agent will take identity from EC2 Apr 21 10:05:40.518439 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 21 10:05:40.619771 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 21 10:05:40.719620 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 21 10:05:40.820054 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 21 10:05:40.920820 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 21 10:05:41.021134 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO [amazon-ssm-agent] Starting Core Agent Apr 21 10:05:41.122713 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 21 10:05:41.152156 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:05:41.225386 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO [Registrar] Starting registrar module Apr 21 10:05:41.288772 tar[1938]: linux-arm64/README.md Apr 21 10:05:41.325653 amazon-ssm-agent[2071]: 2026-04-21 10:05:40 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 21 10:05:41.335220 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:05:41.606826 ntpd[1924]: Listen normally on 6 eth0 [fe80::4c4:9bff:feb4:8297%2]:123 Apr 21 10:05:41.607387 ntpd[1924]: 21 Apr 10:05:41 ntpd[1924]: Listen normally on 6 eth0 [fe80::4c4:9bff:feb4:8297%2]:123 Apr 21 10:05:41.773211 amazon-ssm-agent[2071]: 2026-04-21 10:05:41 INFO [EC2Identity] EC2 registration was successful. Apr 21 10:05:41.803694 amazon-ssm-agent[2071]: 2026-04-21 10:05:41 INFO [CredentialRefresher] credentialRefresher has started Apr 21 10:05:41.804006 amazon-ssm-agent[2071]: 2026-04-21 10:05:41 INFO [CredentialRefresher] Starting credentials refresher loop Apr 21 10:05:41.804153 amazon-ssm-agent[2071]: 2026-04-21 10:05:41 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 21 10:05:41.873302 amazon-ssm-agent[2071]: 2026-04-21 10:05:41 INFO [CredentialRefresher] Next credential rotation will be in 32.29164661196667 minutes Apr 21 10:05:42.016587 sshd_keygen[1953]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 10:05:42.062314 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:05:42.075157 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 10:05:42.087284 systemd[1]: Started sshd@0-172.31.20.11:22-4.175.71.9:50614.service - OpenSSH per-connection server daemon (4.175.71.9:50614). Apr 21 10:05:42.108773 systemd[1]: issuegen.service: Deactivated successfully. 
Apr 21 10:05:42.109258 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 10:05:42.126190 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 10:05:42.166693 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:05:42.182212 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:05:42.188213 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 10:05:42.191441 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 10:05:42.838328 amazon-ssm-agent[2071]: 2026-04-21 10:05:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 21 10:05:42.940117 amazon-ssm-agent[2071]: 2026-04-21 10:05:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2165) started Apr 21 10:05:43.041216 amazon-ssm-agent[2071]: 2026-04-21 10:05:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 21 10:05:43.144433 sshd[2155]: Accepted publickey for core from 4.175.71.9 port 50614 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4 Apr 21 10:05:43.148772 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:05:43.175623 systemd-logind[1930]: New session 1 of user core. Apr 21 10:05:43.177727 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:05:43.194109 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:05:43.227645 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:05:43.241927 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 21 10:05:43.274070 (systemd)[2176]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:05:43.534639 systemd[2176]: Queued start job for default target default.target. Apr 21 10:05:43.546185 systemd[2176]: Created slice app.slice - User Application Slice. Apr 21 10:05:43.546262 systemd[2176]: Reached target paths.target - Paths. Apr 21 10:05:43.546297 systemd[2176]: Reached target timers.target - Timers. Apr 21 10:05:43.549406 systemd[2176]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:05:43.586338 systemd[2176]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:05:43.587029 systemd[2176]: Reached target sockets.target - Sockets. Apr 21 10:05:43.587088 systemd[2176]: Reached target basic.target - Basic System. Apr 21 10:05:43.587209 systemd[2176]: Reached target default.target - Main User Target. Apr 21 10:05:43.587284 systemd[2176]: Startup finished in 298ms. Apr 21 10:05:43.588892 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:05:43.604891 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:05:43.996975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:05:44.002741 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:05:44.008944 systemd[1]: Startup finished in 1.277s (kernel) + 8.053s (initrd) + 11.188s (userspace) = 20.519s. Apr 21 10:05:44.018575 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:05:44.320093 systemd[1]: Started sshd@1-172.31.20.11:22-4.175.71.9:50630.service - OpenSSH per-connection server daemon (4.175.71.9:50630). 
Apr 21 10:05:45.326218 sshd[2197]: Accepted publickey for core from 4.175.71.9 port 50630 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4 Apr 21 10:05:45.327944 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:05:45.336869 systemd-logind[1930]: New session 2 of user core. Apr 21 10:05:45.345827 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:05:45.468958 kubelet[2191]: E0421 10:05:45.468894 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:05:45.473974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:05:45.474453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:05:45.476634 systemd[1]: kubelet.service: Consumed 1.325s CPU time. Apr 21 10:05:46.018828 sshd[2197]: pam_unix(sshd:session): session closed for user core Apr 21 10:05:46.024636 systemd-logind[1930]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:05:46.025889 systemd[1]: sshd@1-172.31.20.11:22-4.175.71.9:50630.service: Deactivated successfully. Apr 21 10:05:46.030067 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:05:46.035160 systemd-logind[1930]: Removed session 2. Apr 21 10:05:46.205052 systemd[1]: Started sshd@2-172.31.20.11:22-4.175.71.9:50962.service - OpenSSH per-connection server daemon (4.175.71.9:50962). Apr 21 10:05:47.245638 sshd[2211]: Accepted publickey for core from 4.175.71.9 port 50962 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4 Apr 21 10:05:47.247410 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:05:47.255590 systemd-logind[1930]: New session 3 of user core. 
Apr 21 10:05:47.267777 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:05:47.948911 sshd[2211]: pam_unix(sshd:session): session closed for user core Apr 21 10:05:47.956290 systemd[1]: sshd@2-172.31.20.11:22-4.175.71.9:50962.service: Deactivated successfully. Apr 21 10:05:47.960024 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:05:47.961564 systemd-logind[1930]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:05:47.963579 systemd-logind[1930]: Removed session 3. Apr 21 10:05:48.128027 systemd[1]: Started sshd@3-172.31.20.11:22-4.175.71.9:50966.service - OpenSSH per-connection server daemon (4.175.71.9:50966). Apr 21 10:05:49.152099 sshd[2218]: Accepted publickey for core from 4.175.71.9 port 50966 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4 Apr 21 10:05:49.154756 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:05:49.163482 systemd-logind[1930]: New session 4 of user core. Apr 21 10:05:49.173874 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:05:49.858688 sshd[2218]: pam_unix(sshd:session): session closed for user core Apr 21 10:05:49.865971 systemd[1]: sshd@3-172.31.20.11:22-4.175.71.9:50966.service: Deactivated successfully. Apr 21 10:05:49.869361 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 10:05:49.870991 systemd-logind[1930]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:05:49.872930 systemd-logind[1930]: Removed session 4. Apr 21 10:05:50.038045 systemd[1]: Started sshd@4-172.31.20.11:22-4.175.71.9:50974.service - OpenSSH per-connection server daemon (4.175.71.9:50974). 
Apr 21 10:05:51.051273 sshd[2225]: Accepted publickey for core from 4.175.71.9 port 50974 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4 Apr 21 10:05:51.053131 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:05:51.061851 systemd-logind[1930]: New session 5 of user core. Apr 21 10:05:51.068818 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:05:51.600254 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:05:51.601484 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:05:51.619195 sudo[2228]: pam_unix(sudo:session): session closed for user root Apr 21 10:05:51.783121 sshd[2225]: pam_unix(sshd:session): session closed for user core Apr 21 10:05:51.791225 systemd[1]: sshd@4-172.31.20.11:22-4.175.71.9:50974.service: Deactivated successfully. Apr 21 10:05:51.791294 systemd-logind[1930]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:05:51.795698 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:05:51.798248 systemd-logind[1930]: Removed session 5. Apr 21 10:05:51.963025 systemd[1]: Started sshd@5-172.31.20.11:22-4.175.71.9:50976.service - OpenSSH per-connection server daemon (4.175.71.9:50976). Apr 21 10:05:52.967430 sshd[2233]: Accepted publickey for core from 4.175.71.9 port 50976 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4 Apr 21 10:05:52.970135 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:05:52.977668 systemd-logind[1930]: New session 6 of user core. Apr 21 10:05:52.989762 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 10:05:53.498864 sudo[2237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:05:53.499569 sudo[2237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:05:53.505812 sudo[2237]: pam_unix(sudo:session): session closed for user root Apr 21 10:05:53.516027 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:05:53.516693 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:05:53.541025 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:05:53.556394 auditctl[2240]: No rules Apr 21 10:05:53.557471 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:05:53.557852 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:05:53.570215 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:05:53.614394 augenrules[2258]: No rules Apr 21 10:05:53.617615 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:05:53.621028 sudo[2236]: pam_unix(sudo:session): session closed for user root Apr 21 10:05:53.782611 sshd[2233]: pam_unix(sshd:session): session closed for user core Apr 21 10:05:53.788020 systemd[1]: sshd@5-172.31.20.11:22-4.175.71.9:50976.service: Deactivated successfully. Apr 21 10:05:53.791091 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:05:53.795072 systemd-logind[1930]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:05:53.797328 systemd-logind[1930]: Removed session 6. Apr 21 10:05:53.971062 systemd[1]: Started sshd@6-172.31.20.11:22-4.175.71.9:50990.service - OpenSSH per-connection server daemon (4.175.71.9:50990). 
Apr 21 10:05:54.997010 sshd[2266]: Accepted publickey for core from 4.175.71.9 port 50990 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4 Apr 21 10:05:54.999599 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:05:55.006888 systemd-logind[1930]: New session 7 of user core. Apr 21 10:05:55.017766 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:05:55.490968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 10:05:55.502210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:05:55.540885 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:05:55.541741 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:05:55.978999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:05:55.984069 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:05:56.075903 kubelet[2291]: E0421 10:05:56.075840 2291 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:05:56.090717 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:05:56.091266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:05:56.091666 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 21 10:05:56.099159 (dockerd)[2299]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:05:56.501430 dockerd[2299]: time="2026-04-21T10:05:56.501321486Z" level=info msg="Starting up" Apr 21 10:05:56.634075 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1771710370-merged.mount: Deactivated successfully. Apr 21 10:05:56.658016 systemd[1]: var-lib-docker-metacopy\x2dcheck2460198633-merged.mount: Deactivated successfully. Apr 21 10:05:56.676218 dockerd[2299]: time="2026-04-21T10:05:56.675809848Z" level=info msg="Loading containers: start." Apr 21 10:05:56.865580 kernel: Initializing XFRM netlink socket Apr 21 10:05:56.896719 (udev-worker)[2322]: Network interface NamePolicy= disabled on kernel command line. Apr 21 10:05:56.989306 systemd-networkd[1831]: docker0: Link UP Apr 21 10:05:57.019955 dockerd[2299]: time="2026-04-21T10:05:57.019876045Z" level=info msg="Loading containers: done." Apr 21 10:05:57.052923 dockerd[2299]: time="2026-04-21T10:05:57.052843189Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:05:57.053153 dockerd[2299]: time="2026-04-21T10:05:57.053001380Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 10:05:57.053214 dockerd[2299]: time="2026-04-21T10:05:57.053189502Z" level=info msg="Daemon has completed initialization" Apr 21 10:05:57.122606 dockerd[2299]: time="2026-04-21T10:05:57.122133285Z" level=info msg="API listen on /run/docker.sock" Apr 21 10:05:57.124693 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 21 10:05:57.626386 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck857082305-merged.mount: Deactivated successfully. Apr 21 10:05:58.359946 containerd[1955]: time="2026-04-21T10:05:58.359445576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 21 10:05:59.047596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212825141.mount: Deactivated successfully. Apr 21 10:06:00.439038 containerd[1955]: time="2026-04-21T10:06:00.438978318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:00.441961 containerd[1955]: time="2026-04-21T10:06:00.441913040Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=24608785" Apr 21 10:06:00.444188 containerd[1955]: time="2026-04-21T10:06:00.444116045Z" level=info msg="ImageCreate event name:\"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:00.450468 containerd[1955]: time="2026-04-21T10:06:00.450386335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:00.452807 containerd[1955]: time="2026-04-21T10:06:00.452755779Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"24605384\" in 2.093246247s" Apr 21 10:06:00.454441 containerd[1955]: time="2026-04-21T10:06:00.452973556Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference 
\"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\"" Apr 21 10:06:00.454990 containerd[1955]: time="2026-04-21T10:06:00.454926596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 21 10:06:01.710385 containerd[1955]: time="2026-04-21T10:06:01.710325735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:01.713732 containerd[1955]: time="2026-04-21T10:06:01.713652561Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=19073294" Apr 21 10:06:01.716713 containerd[1955]: time="2026-04-21T10:06:01.716652488Z" level=info msg="ImageCreate event name:\"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:01.722449 containerd[1955]: time="2026-04-21T10:06:01.722392869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:01.724994 containerd[1955]: time="2026-04-21T10:06:01.724943232Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"20579933\" in 1.269954781s" Apr 21 10:06:01.725172 containerd[1955]: time="2026-04-21T10:06:01.725142363Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\"" Apr 21 10:06:01.727095 containerd[1955]: 
time="2026-04-21T10:06:01.726943407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 21 10:06:02.809536 containerd[1955]: time="2026-04-21T10:06:02.808041886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:02.810389 containerd[1955]: time="2026-04-21T10:06:02.810326820Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=13800836" Apr 21 10:06:02.810978 containerd[1955]: time="2026-04-21T10:06:02.810942103Z" level=info msg="ImageCreate event name:\"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:02.817984 containerd[1955]: time="2026-04-21T10:06:02.817918082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:02.825793 containerd[1955]: time="2026-04-21T10:06:02.825696997Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"15307493\" in 1.098423641s" Apr 21 10:06:02.825976 containerd[1955]: time="2026-04-21T10:06:02.825945329Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\"" Apr 21 10:06:02.826866 containerd[1955]: time="2026-04-21T10:06:02.826815692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 21 10:06:04.152996 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3306001135.mount: Deactivated successfully. Apr 21 10:06:04.532823 containerd[1955]: time="2026-04-21T10:06:04.532765065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:04.534844 containerd[1955]: time="2026-04-21T10:06:04.534709076Z" level=info msg="ImageCreate event name:\"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:04.534844 containerd[1955]: time="2026-04-21T10:06:04.534801042Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=22340584" Apr 21 10:06:04.538370 containerd[1955]: time="2026-04-21T10:06:04.538305630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:04.540131 containerd[1955]: time="2026-04-21T10:06:04.539936849Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"22339603\" in 1.713063695s" Apr 21 10:06:04.540131 containerd[1955]: time="2026-04-21T10:06:04.539993637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\"" Apr 21 10:06:04.541157 containerd[1955]: time="2026-04-21T10:06:04.541116450Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 21 10:06:05.195383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785282954.mount: Deactivated successfully. 
Apr 21 10:06:06.241082 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 21 10:06:06.248871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:06:06.658436 containerd[1955]: time="2026-04-21T10:06:06.658250027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:06.672689 containerd[1955]: time="2026-04-21T10:06:06.672619144Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211" Apr 21 10:06:06.683739 containerd[1955]: time="2026-04-21T10:06:06.683599591Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:06.704954 containerd[1955]: time="2026-04-21T10:06:06.704855854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:06.707653 containerd[1955]: time="2026-04-21T10:06:06.707596920Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 2.166265347s" Apr 21 10:06:06.707956 containerd[1955]: time="2026-04-21T10:06:06.707811179Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"" Apr 21 10:06:06.710021 containerd[1955]: time="2026-04-21T10:06:06.709410330Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" 
Apr 21 10:06:06.805681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:06:06.808178 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:06:06.882417 kubelet[2578]: E0421 10:06:06.882242 2578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:06:06.887406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:06:06.887756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:06:07.384108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538303870.mount: Deactivated successfully. Apr 21 10:06:07.393117 containerd[1955]: time="2026-04-21T10:06:07.391498953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:07.394282 containerd[1955]: time="2026-04-21T10:06:07.394227605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Apr 21 10:06:07.395288 containerd[1955]: time="2026-04-21T10:06:07.395231751Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:07.401191 containerd[1955]: time="2026-04-21T10:06:07.401127921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:07.403376 containerd[1955]: time="2026-04-21T10:06:07.403314826Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 693.839232ms" Apr 21 10:06:07.403935 containerd[1955]: time="2026-04-21T10:06:07.403373728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Apr 21 10:06:07.404178 containerd[1955]: time="2026-04-21T10:06:07.404128113Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 21 10:06:07.978650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069807065.mount: Deactivated successfully. Apr 21 10:06:09.025610 containerd[1955]: time="2026-04-21T10:06:09.025541365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:09.027315 containerd[1955]: time="2026-04-21T10:06:09.027229241Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21752308" Apr 21 10:06:09.029008 containerd[1955]: time="2026-04-21T10:06:09.028129631Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:09.035082 containerd[1955]: time="2026-04-21T10:06:09.035025433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:09.037230 containerd[1955]: time="2026-04-21T10:06:09.037180510Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag 
\"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.632992511s" Apr 21 10:06:09.037435 containerd[1955]: time="2026-04-21T10:06:09.037402597Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\"" Apr 21 10:06:10.081395 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 21 10:06:13.798693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:06:13.808415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:06:13.870944 systemd[1]: Reloading requested from client PID 2679 ('systemctl') (unit session-7.scope)... Apr 21 10:06:13.870984 systemd[1]: Reloading... Apr 21 10:06:14.093552 zram_generator::config[2722]: No configuration found. Apr 21 10:06:14.345280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:06:14.520760 systemd[1]: Reloading finished in 649 ms. Apr 21 10:06:14.611395 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 21 10:06:14.611655 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 21 10:06:14.612190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:06:14.620091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:06:14.944787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
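[editorial sketch] The kubelet entries in this transcript (e.g. `E0421 10:06:06.882242 2578 run.go:72] "command failed" ...`) use the klog header: one severity letter (I/W/E/F), MMDD, time with microseconds, PID, then `file.go:line]`. A minimal sketch of splitting that header apart (the function name and dict keys are assumptions, not klog API):

```python
import re

# klog header as it appears in the kubelet lines of this log:
#   <I|W|E|F><MMDD> <HH:MM:SS.ffffff> <pid> <file.go:line>] <message>
KLOG_RE = re.compile(
    r'^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +(?P<pid>\d+) '
    r'(?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
)

def parse_klog(line: str):
    """Split a klog-formatted line into its header fields, or None."""
    m = KLOG_RE.match(line)
    if not m:
        return None
    return {"severity": m["sev"], "month": int(m["month"]), "day": int(m["day"]),
            "time": m["time"], "pid": int(m["pid"]),
            "source": m["src"], "message": m["msg"]}

sample = ('E0421 10:06:15.509691 2781 kubelet_node_status.go:392] '
          '"Error getting the current node from lister" err="node \\"ip-172-31-20-11\\" not found"')
print(parse_klog(sample))
```

Filtering on `severity == "E"` is a quick way to pull out just the failures (the missing `/var/lib/kubelet/config.yaml`, the `connection refused` dials to the API server) from a capture like this one.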
Apr 21 10:06:14.959035 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:06:15.032526 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:06:15.441076 kubelet[2781]: I0421 10:06:15.440982 2781 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:06:15.441076 kubelet[2781]: I0421 10:06:15.441056 2781 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:06:15.443404 kubelet[2781]: I0421 10:06:15.443357 2781 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:06:15.443404 kubelet[2781]: I0421 10:06:15.443393 2781 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:06:15.443952 kubelet[2781]: I0421 10:06:15.443910 2781 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 10:06:15.455570 kubelet[2781]: E0421 10:06:15.454837 2781 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:06:15.458558 kubelet[2781]: I0421 10:06:15.458479 2781 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:06:15.465862 kubelet[2781]: E0421 10:06:15.465789 2781 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:06:15.466060 kubelet[2781]: I0421 
10:06:15.465905 2781 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:06:15.471541 kubelet[2781]: I0421 10:06:15.471204 2781 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 21 10:06:15.472708 kubelet[2781]: I0421 10:06:15.472643 2781 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:06:15.472971 kubelet[2781]: I0421 10:06:15.472701 2781 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsL
imit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:06:15.473166 kubelet[2781]: I0421 10:06:15.472972 2781 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 10:06:15.473166 kubelet[2781]: I0421 10:06:15.472992 2781 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 10:06:15.473166 kubelet[2781]: I0421 10:06:15.473143 2781 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:06:15.475852 kubelet[2781]: I0421 10:06:15.475802 2781 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 10:06:15.478547 kubelet[2781]: I0421 10:06:15.476220 2781 kubelet.go:482] "Attempting to sync node with API server" Apr 21 10:06:15.478547 kubelet[2781]: I0421 10:06:15.476257 2781 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:06:15.478547 kubelet[2781]: I0421 10:06:15.476293 2781 kubelet.go:394] "Adding apiserver pod source" Apr 21 10:06:15.478547 kubelet[2781]: I0421 10:06:15.476319 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:06:15.482319 kubelet[2781]: I0421 10:06:15.482274 2781 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:06:15.484356 kubelet[2781]: I0421 10:06:15.484292 2781 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:06:15.484356 kubelet[2781]: I0421 10:06:15.484363 2781 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:06:15.484630 kubelet[2781]: W0421 10:06:15.484437 2781 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ 
does not exist. Recreating. Apr 21 10:06:15.489569 kubelet[2781]: I0421 10:06:15.488905 2781 server.go:1257] "Started kubelet" Apr 21 10:06:15.490952 kubelet[2781]: I0421 10:06:15.490881 2781 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:06:15.493615 kubelet[2781]: I0421 10:06:15.493580 2781 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:06:15.496955 kubelet[2781]: I0421 10:06:15.496561 2781 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:06:15.496955 kubelet[2781]: I0421 10:06:15.496673 2781 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:06:15.497188 kubelet[2781]: I0421 10:06:15.497134 2781 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:06:15.501392 kubelet[2781]: E0421 10:06:15.497396 2781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.11:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-11.18a8573db2039fb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-11,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-11,},FirstTimestamp:2026-04-21 10:06:15.488864177 +0000 UTC m=+0.523588124,LastTimestamp:2026-04-21 10:06:15.488864177 +0000 UTC m=+0.523588124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-11,}" Apr 21 10:06:15.505394 kubelet[2781]: E0421 10:06:15.505355 2781 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:06:15.505654 kubelet[2781]: I0421 10:06:15.505600 2781 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 10:06:15.506215 kubelet[2781]: I0421 10:06:15.506187 2781 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:06:15.509730 kubelet[2781]: E0421 10:06:15.509691 2781 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-20-11\" not found" Apr 21 10:06:15.509943 kubelet[2781]: I0421 10:06:15.509923 2781 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 10:06:15.510393 kubelet[2781]: I0421 10:06:15.510361 2781 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:06:15.510643 kubelet[2781]: I0421 10:06:15.510622 2781 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:06:15.512605 kubelet[2781]: I0421 10:06:15.512569 2781 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:06:15.513073 kubelet[2781]: I0421 10:06:15.513039 2781 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:06:15.517091 kubelet[2781]: I0421 10:06:15.517056 2781 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:06:15.526555 kubelet[2781]: E0421 10:06:15.525798 2781 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-11?timeout=10s\": dial tcp 172.31.20.11:6443: connect: connection refused" interval="200ms" Apr 21 10:06:15.547662 kubelet[2781]: I0421 10:06:15.547620 2781 cpu_manager.go:225] "Starting" policy="none" Apr 21 
10:06:15.547662 kubelet[2781]: I0421 10:06:15.547656 2781 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 10:06:15.547875 kubelet[2781]: I0421 10:06:15.547693 2781 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 10:06:15.550775 kubelet[2781]: I0421 10:06:15.550724 2781 policy_none.go:50] "Start" Apr 21 10:06:15.550775 kubelet[2781]: I0421 10:06:15.550768 2781 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:06:15.550953 kubelet[2781]: I0421 10:06:15.550793 2781 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:06:15.553981 kubelet[2781]: I0421 10:06:15.553818 2781 policy_none.go:44] "Start" Apr 21 10:06:15.568324 kubelet[2781]: I0421 10:06:15.568246 2781 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 10:06:15.573969 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 10:06:15.575571 kubelet[2781]: I0421 10:06:15.574673 2781 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 10:06:15.575571 kubelet[2781]: I0421 10:06:15.574835 2781 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 10:06:15.575571 kubelet[2781]: I0421 10:06:15.575088 2781 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 10:06:15.575571 kubelet[2781]: E0421 10:06:15.575280 2781 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:06:15.592629 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 21 10:06:15.599737 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 21 10:06:15.606327 kubelet[2781]: E0421 10:06:15.606267 2781 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:06:15.606644 kubelet[2781]: I0421 10:06:15.606601 2781 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:06:15.606712 kubelet[2781]: I0421 10:06:15.606636 2781 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:06:15.610737 kubelet[2781]: I0421 10:06:15.608168 2781 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 10:06:15.611600 kubelet[2781]: E0421 10:06:15.611139 2781 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:06:15.612682 kubelet[2781]: E0421 10:06:15.611688 2781 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-11\" not found" Apr 21 10:06:15.696118 systemd[1]: Created slice kubepods-burstable-podc1a049c8fcc29172082c0a68710c99e8.slice - libcontainer container kubepods-burstable-podc1a049c8fcc29172082c0a68710c99e8.slice. 
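[editorial sketch] The eviction manager started above enforces the `HardEvictionThresholds` from the nodeConfig logged earlier: each threshold is a `LessThan` comparison against either an absolute quantity (`memory.available` < 100Mi) or a fraction of capacity (`nodefs.available` < 10%). A minimal sketch of that check under those logged values (the helper `threshold_met` is an assumption, not kubelet code):

```python
from typing import Optional

def threshold_met(available: int, capacity: int,
                  quantity: Optional[int], percentage: Optional[float]) -> bool:
    """Hard-eviction 'LessThan' check: fire when the available amount drops
    below an absolute quantity, or below a fraction of total capacity."""
    if quantity is not None:
        return available < quantity
    assert percentage is not None
    return available < capacity * percentage

Mi = 1024 ** 2
# memory.available < 100Mi (absolute quantity, from the logged nodeConfig)
print(threshold_met(80 * Mi, 8192 * Mi, 100 * Mi, None))
# nodefs.available < 10% of capacity (percentage form): 15% free does not fire
print(threshold_met(15, 100, None, 0.1))
```

Exactly one of `quantity`/`percentage` is set per signal in the logged config, which is why the sketch branches on whichever is present.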
Apr 21 10:06:15.709397 kubelet[2781]: I0421 10:06:15.708985 2781 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-20-11" Apr 21 10:06:15.710463 kubelet[2781]: E0421 10:06:15.709650 2781 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.20.11:6443/api/v1/nodes\": dial tcp 172.31.20.11:6443: connect: connection refused" node="ip-172-31-20-11" Apr 21 10:06:15.710463 kubelet[2781]: E0421 10:06:15.710193 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:15.711290 kubelet[2781]: I0421 10:06:15.711237 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:15.711446 kubelet[2781]: I0421 10:06:15.711412 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:15.711709 kubelet[2781]: I0421 10:06:15.711468 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1a049c8fcc29172082c0a68710c99e8-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-11\" (UID: \"c1a049c8fcc29172082c0a68710c99e8\") " pod="kube-system/kube-scheduler-ip-172-31-20-11" Apr 21 10:06:15.712031 kubelet[2781]: I0421 10:06:15.711984 2781 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b5e039f831c8ea0c4589c173271d02c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-11\" (UID: \"5b5e039f831c8ea0c4589c173271d02c\") " pod="kube-system/kube-apiserver-ip-172-31-20-11" Apr 21 10:06:15.712189 kubelet[2781]: I0421 10:06:15.712164 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:15.712283 kubelet[2781]: I0421 10:06:15.712251 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b5e039f831c8ea0c4589c173271d02c-ca-certs\") pod \"kube-apiserver-ip-172-31-20-11\" (UID: \"5b5e039f831c8ea0c4589c173271d02c\") " pod="kube-system/kube-apiserver-ip-172-31-20-11" Apr 21 10:06:15.713591 kubelet[2781]: I0421 10:06:15.712945 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b5e039f831c8ea0c4589c173271d02c-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-11\" (UID: \"5b5e039f831c8ea0c4589c173271d02c\") " pod="kube-system/kube-apiserver-ip-172-31-20-11" Apr 21 10:06:15.713591 kubelet[2781]: I0421 10:06:15.713026 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:15.713591 kubelet[2781]: I0421 10:06:15.713185 
2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:15.718497 systemd[1]: Created slice kubepods-burstable-pod5b5e039f831c8ea0c4589c173271d02c.slice - libcontainer container kubepods-burstable-pod5b5e039f831c8ea0c4589c173271d02c.slice. Apr 21 10:06:15.723251 kubelet[2781]: E0421 10:06:15.723209 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:15.729045 kubelet[2781]: E0421 10:06:15.728977 2781 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-11?timeout=10s\": dial tcp 172.31.20.11:6443: connect: connection refused" interval="400ms" Apr 21 10:06:15.730843 systemd[1]: Created slice kubepods-burstable-pod53bc613d3b4a50e5bcba1664ca1eee28.slice - libcontainer container kubepods-burstable-pod53bc613d3b4a50e5bcba1664ca1eee28.slice. 
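[editorial sketch] The `Created slice kubepods-burstable-pod<uid>.slice` entries above show how the kubelet's systemd cgroup driver names per-pod slices: QoS class plus the pod UID. A minimal sketch of that naming (a hypothetical helper; real kubelet additionally applies systemd escaping, and UIDs with dashes have them replaced with underscores, an assumption for the hex-only UIDs seen here):

```python
def pod_slice(uid: str, qos: str = "burstable") -> str:
    """Build the systemd slice name seen in the log for a pod's cgroup.
    Dashes in the UID are mapped to underscores for systemd unit naming."""
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("c1a049c8fcc29172082c0a68710c99e8"))
```

The three slices created here (for UIDs `c1a049c8...`, `5b5e039f...`, `53bc613d...`) all land under `kubepods-burstable.slice`, which was itself created just before the static pod manifests were processed.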
Apr 21 10:06:15.734331 kubelet[2781]: E0421 10:06:15.734273 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:15.912321 kubelet[2781]: I0421 10:06:15.912281 2781 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-20-11" Apr 21 10:06:15.913002 kubelet[2781]: E0421 10:06:15.912936 2781 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.20.11:6443/api/v1/nodes\": dial tcp 172.31.20.11:6443: connect: connection refused" node="ip-172-31-20-11" Apr 21 10:06:16.016150 containerd[1955]: time="2026-04-21T10:06:16.015752293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-11,Uid:c1a049c8fcc29172082c0a68710c99e8,Namespace:kube-system,Attempt:0,}" Apr 21 10:06:16.028062 containerd[1955]: time="2026-04-21T10:06:16.027986263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-11,Uid:5b5e039f831c8ea0c4589c173271d02c,Namespace:kube-system,Attempt:0,}" Apr 21 10:06:16.039397 containerd[1955]: time="2026-04-21T10:06:16.039033896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-11,Uid:53bc613d3b4a50e5bcba1664ca1eee28,Namespace:kube-system,Attempt:0,}" Apr 21 10:06:16.130259 kubelet[2781]: E0421 10:06:16.130200 2781 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-11?timeout=10s\": dial tcp 172.31.20.11:6443: connect: connection refused" interval="800ms" Apr 21 10:06:16.315950 kubelet[2781]: I0421 10:06:16.315339 2781 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-20-11" Apr 21 10:06:16.315950 kubelet[2781]: E0421 10:06:16.315795 2781 kubelet_node_status.go:106] "Unable to register node with API server" err="Post 
\"https://172.31.20.11:6443/api/v1/nodes\": dial tcp 172.31.20.11:6443: connect: connection refused" node="ip-172-31-20-11" Apr 21 10:06:16.600502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165255658.mount: Deactivated successfully. Apr 21 10:06:16.616581 containerd[1955]: time="2026-04-21T10:06:16.615721448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:06:16.618083 containerd[1955]: time="2026-04-21T10:06:16.618008495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:06:16.620387 containerd[1955]: time="2026-04-21T10:06:16.619986723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 21 10:06:16.622023 containerd[1955]: time="2026-04-21T10:06:16.621952730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:06:16.624084 containerd[1955]: time="2026-04-21T10:06:16.624032385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:06:16.627545 containerd[1955]: time="2026-04-21T10:06:16.626777401Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:06:16.628410 containerd[1955]: time="2026-04-21T10:06:16.628352048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:06:16.633192 containerd[1955]: time="2026-04-21T10:06:16.633122404Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:06:16.637354 containerd[1955]: time="2026-04-21T10:06:16.637287802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.184072ms" Apr 21 10:06:16.642674 containerd[1955]: time="2026-04-21T10:06:16.642609173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 626.743556ms" Apr 21 10:06:16.644196 containerd[1955]: time="2026-04-21T10:06:16.644144500Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 605.002034ms" Apr 21 10:06:16.870716 containerd[1955]: time="2026-04-21T10:06:16.867813100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:06:16.870716 containerd[1955]: time="2026-04-21T10:06:16.867933244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:06:16.870716 containerd[1955]: time="2026-04-21T10:06:16.867962695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:16.870716 containerd[1955]: time="2026-04-21T10:06:16.868144490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:16.884703 containerd[1955]: time="2026-04-21T10:06:16.883658343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:06:16.884703 containerd[1955]: time="2026-04-21T10:06:16.883763780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:06:16.884703 containerd[1955]: time="2026-04-21T10:06:16.883813725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:16.884703 containerd[1955]: time="2026-04-21T10:06:16.884023049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:16.886962 containerd[1955]: time="2026-04-21T10:06:16.886782905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:06:16.887289 containerd[1955]: time="2026-04-21T10:06:16.887130130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:06:16.887351 containerd[1955]: time="2026-04-21T10:06:16.887242146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:16.887612 containerd[1955]: time="2026-04-21T10:06:16.887551397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:16.929934 systemd[1]: Started cri-containerd-57643a2d4f3516af55ea9d4567035419c7a588810aa3e6948d6c1023cc061740.scope - libcontainer container 57643a2d4f3516af55ea9d4567035419c7a588810aa3e6948d6c1023cc061740. Apr 21 10:06:16.932049 kubelet[2781]: E0421 10:06:16.931103 2781 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-11?timeout=10s\": dial tcp 172.31.20.11:6443: connect: connection refused" interval="1.6s" Apr 21 10:06:16.943070 systemd[1]: Started cri-containerd-90dd0df3282434f4248494f1309ad6660e989dd97206795dff9c7dd0686bf9a7.scope - libcontainer container 90dd0df3282434f4248494f1309ad6660e989dd97206795dff9c7dd0686bf9a7. Apr 21 10:06:16.956323 systemd[1]: Started cri-containerd-dcbd461d69c9327767543660c172abb76953b2c505fb8e537de4e9ae0b089841.scope - libcontainer container dcbd461d69c9327767543660c172abb76953b2c505fb8e537de4e9ae0b089841. 
Apr 21 10:06:17.073944 containerd[1955]: time="2026-04-21T10:06:17.073878527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-11,Uid:5b5e039f831c8ea0c4589c173271d02c,Namespace:kube-system,Attempt:0,} returns sandbox id \"57643a2d4f3516af55ea9d4567035419c7a588810aa3e6948d6c1023cc061740\"" Apr 21 10:06:17.075861 containerd[1955]: time="2026-04-21T10:06:17.075795753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-11,Uid:53bc613d3b4a50e5bcba1664ca1eee28,Namespace:kube-system,Attempt:0,} returns sandbox id \"90dd0df3282434f4248494f1309ad6660e989dd97206795dff9c7dd0686bf9a7\"" Apr 21 10:06:17.089763 containerd[1955]: time="2026-04-21T10:06:17.089486615Z" level=info msg="CreateContainer within sandbox \"90dd0df3282434f4248494f1309ad6660e989dd97206795dff9c7dd0686bf9a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:06:17.090296 containerd[1955]: time="2026-04-21T10:06:17.089966063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-11,Uid:c1a049c8fcc29172082c0a68710c99e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcbd461d69c9327767543660c172abb76953b2c505fb8e537de4e9ae0b089841\"" Apr 21 10:06:17.093047 containerd[1955]: time="2026-04-21T10:06:17.092996965Z" level=info msg="CreateContainer within sandbox \"57643a2d4f3516af55ea9d4567035419c7a588810aa3e6948d6c1023cc061740\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:06:17.102819 containerd[1955]: time="2026-04-21T10:06:17.102490650Z" level=info msg="CreateContainer within sandbox \"dcbd461d69c9327767543660c172abb76953b2c505fb8e537de4e9ae0b089841\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:06:17.119295 kubelet[2781]: I0421 10:06:17.118420 2781 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-20-11" Apr 21 10:06:17.119295 kubelet[2781]: E0421 10:06:17.119217 2781 
kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.20.11:6443/api/v1/nodes\": dial tcp 172.31.20.11:6443: connect: connection refused" node="ip-172-31-20-11" Apr 21 10:06:17.144900 containerd[1955]: time="2026-04-21T10:06:17.143596937Z" level=info msg="CreateContainer within sandbox \"90dd0df3282434f4248494f1309ad6660e989dd97206795dff9c7dd0686bf9a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc\"" Apr 21 10:06:17.144900 containerd[1955]: time="2026-04-21T10:06:17.144480651Z" level=info msg="StartContainer for \"0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc\"" Apr 21 10:06:17.151722 containerd[1955]: time="2026-04-21T10:06:17.151485792Z" level=info msg="CreateContainer within sandbox \"dcbd461d69c9327767543660c172abb76953b2c505fb8e537de4e9ae0b089841\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7\"" Apr 21 10:06:17.154538 containerd[1955]: time="2026-04-21T10:06:17.152591688Z" level=info msg="StartContainer for \"75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7\"" Apr 21 10:06:17.155250 containerd[1955]: time="2026-04-21T10:06:17.155199788Z" level=info msg="CreateContainer within sandbox \"57643a2d4f3516af55ea9d4567035419c7a588810aa3e6948d6c1023cc061740\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"08c375305a941f47509863054e75b5f71921ac97c7a465cafea6e8b46230223d\"" Apr 21 10:06:17.156294 containerd[1955]: time="2026-04-21T10:06:17.156244730Z" level=info msg="StartContainer for \"08c375305a941f47509863054e75b5f71921ac97c7a465cafea6e8b46230223d\"" Apr 21 10:06:17.206845 systemd[1]: Started cri-containerd-0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc.scope - libcontainer container 
0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc. Apr 21 10:06:17.236599 systemd[1]: Started cri-containerd-75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7.scope - libcontainer container 75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7. Apr 21 10:06:17.254861 systemd[1]: Started cri-containerd-08c375305a941f47509863054e75b5f71921ac97c7a465cafea6e8b46230223d.scope - libcontainer container 08c375305a941f47509863054e75b5f71921ac97c7a465cafea6e8b46230223d. Apr 21 10:06:17.336991 containerd[1955]: time="2026-04-21T10:06:17.336923373Z" level=info msg="StartContainer for \"0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc\" returns successfully" Apr 21 10:06:17.381763 containerd[1955]: time="2026-04-21T10:06:17.381689666Z" level=info msg="StartContainer for \"75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7\" returns successfully" Apr 21 10:06:17.395932 containerd[1955]: time="2026-04-21T10:06:17.395156712Z" level=info msg="StartContainer for \"08c375305a941f47509863054e75b5f71921ac97c7a465cafea6e8b46230223d\" returns successfully" Apr 21 10:06:17.608973 kubelet[2781]: E0421 10:06:17.608821 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:17.628532 kubelet[2781]: E0421 10:06:17.628273 2781 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:06:17.647654 kubelet[2781]: E0421 10:06:17.646764 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" 
node="ip-172-31-20-11" Apr 21 10:06:17.647654 kubelet[2781]: E0421 10:06:17.647420 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:18.622533 kubelet[2781]: E0421 10:06:18.622444 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:18.641567 kubelet[2781]: E0421 10:06:18.640080 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:18.722531 kubelet[2781]: I0421 10:06:18.721862 2781 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-20-11" Apr 21 10:06:19.626032 kubelet[2781]: E0421 10:06:19.625981 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:20.075198 kubelet[2781]: E0421 10:06:20.075126 2781 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:20.266921 kubelet[2781]: E0421 10:06:20.266850 2781 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-11\" not found" node="ip-172-31-20-11" Apr 21 10:06:20.386157 kubelet[2781]: I0421 10:06:20.384699 2781 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-20-11" Apr 21 10:06:20.417101 kubelet[2781]: I0421 10:06:20.416981 2781 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-11" Apr 21 10:06:20.449746 kubelet[2781]: E0421 10:06:20.449676 2781 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-11\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-11" Apr 21 10:06:20.449746 kubelet[2781]: I0421 10:06:20.449725 2781 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-11" Apr 21 10:06:20.457851 kubelet[2781]: E0421 10:06:20.457719 2781 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-11" Apr 21 10:06:20.457851 kubelet[2781]: I0421 10:06:20.457791 2781 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:20.468073 kubelet[2781]: E0421 10:06:20.468015 2781 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:20.484665 kubelet[2781]: I0421 10:06:20.484373 2781 apiserver.go:52] "Watching apiserver" Apr 21 10:06:20.511717 kubelet[2781]: I0421 10:06:20.511614 2781 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:06:21.639185 kubelet[2781]: I0421 10:06:21.638854 2781 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-11" Apr 21 10:06:22.945956 systemd[1]: Reloading requested from client PID 3061 ('systemctl') (unit session-7.scope)... Apr 21 10:06:22.946519 systemd[1]: Reloading... Apr 21 10:06:23.135663 zram_generator::config[3113]: No configuration found. Apr 21 10:06:23.364228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 21 10:06:23.574104 systemd[1]: Reloading finished in 626 ms. Apr 21 10:06:23.658859 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:06:23.677311 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:06:23.677835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:06:23.677928 systemd[1]: kubelet.service: Consumed 1.310s CPU time, 123.8M memory peak, 0B memory swap peak. Apr 21 10:06:23.686080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:06:24.052413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:06:24.073407 (kubelet)[3162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:06:24.173577 kubelet[3162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:06:24.189141 kubelet[3162]: I0421 10:06:24.189061 3162 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:06:24.190539 kubelet[3162]: I0421 10:06:24.189314 3162 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:06:24.190539 kubelet[3162]: I0421 10:06:24.189367 3162 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:06:24.190539 kubelet[3162]: I0421 10:06:24.189380 3162 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:06:24.190539 kubelet[3162]: I0421 10:06:24.189873 3162 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 10:06:24.192670 kubelet[3162]: I0421 10:06:24.192630 3162 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:06:24.197645 kubelet[3162]: I0421 10:06:24.197602 3162 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:06:24.207079 kubelet[3162]: E0421 10:06:24.207022 3162 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:06:24.207484 kubelet[3162]: I0421 10:06:24.207459 3162 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:06:24.221250 kubelet[3162]: I0421 10:06:24.221200 3162 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:06:24.221863 kubelet[3162]: I0421 10:06:24.221821 3162 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:06:24.222358 kubelet[3162]: I0421 10:06:24.221957 3162 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:06:24.222874 kubelet[3162]: I0421 10:06:24.222825 3162 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 
10:06:24.222984 kubelet[3162]: I0421 10:06:24.222966 3162 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 10:06:24.223137 kubelet[3162]: I0421 10:06:24.223117 3162 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:06:24.223644 kubelet[3162]: I0421 10:06:24.223618 3162 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 10:06:24.224025 kubelet[3162]: I0421 10:06:24.224003 3162 kubelet.go:482] "Attempting to sync node with API server" Apr 21 10:06:24.224148 kubelet[3162]: I0421 10:06:24.224129 3162 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:06:24.224267 kubelet[3162]: I0421 10:06:24.224248 3162 kubelet.go:394] "Adding apiserver pod source" Apr 21 10:06:24.224374 kubelet[3162]: I0421 10:06:24.224356 3162 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:06:24.226742 kubelet[3162]: I0421 10:06:24.226697 3162 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:06:24.230003 kubelet[3162]: I0421 10:06:24.228763 3162 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:06:24.230003 kubelet[3162]: I0421 10:06:24.228827 3162 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:06:24.234751 kubelet[3162]: I0421 10:06:24.234719 3162 server.go:1257] "Started kubelet" Apr 21 10:06:24.241051 kubelet[3162]: I0421 10:06:24.241014 3162 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 10:06:24.252673 kubelet[3162]: I0421 10:06:24.252594 3162 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:06:24.254630 kubelet[3162]: I0421 10:06:24.254596 3162 server.go:317] "Adding debug handlers 
to kubelet server" Apr 21 10:06:24.260413 kubelet[3162]: I0421 10:06:24.260328 3162 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:06:24.260726 kubelet[3162]: I0421 10:06:24.260700 3162 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:06:24.261071 kubelet[3162]: I0421 10:06:24.261048 3162 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:06:24.261639 kubelet[3162]: I0421 10:06:24.261610 3162 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:06:24.266897 kubelet[3162]: I0421 10:06:24.266851 3162 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 10:06:24.270805 kubelet[3162]: E0421 10:06:24.270763 3162 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-20-11\" not found" Apr 21 10:06:24.271988 kubelet[3162]: I0421 10:06:24.271960 3162 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:06:24.273810 kubelet[3162]: I0421 10:06:24.273788 3162 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:06:24.319148 kubelet[3162]: I0421 10:06:24.317699 3162 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 10:06:24.324392 kubelet[3162]: I0421 10:06:24.323924 3162 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:06:24.324392 kubelet[3162]: I0421 10:06:24.323973 3162 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 10:06:24.324392 kubelet[3162]: I0421 10:06:24.324011 3162 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 10:06:24.324392 kubelet[3162]: E0421 10:06:24.324082 3162 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:06:24.335423 kubelet[3162]: I0421 10:06:24.335369 3162 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:06:24.348878 kubelet[3162]: I0421 10:06:24.348797 3162 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:06:24.348878 kubelet[3162]: I0421 10:06:24.348839 3162 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:06:24.426132 kubelet[3162]: E0421 10:06:24.425377 3162 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 21 10:06:24.491801 update_engine[1931]: I20260421 10:06:24.490854 1931 update_attempter.cc:509] Updating boot flags... 
Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.567300 3162 cpu_manager.go:225] "Starting" policy="none" Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.567334 3162 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.567368 3162 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.569178 3162 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.569211 3162 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.569249 3162 policy_none.go:50] "Start" Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.569271 3162 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.569294 3162 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.569530 3162 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 21 10:06:24.569702 kubelet[3162]: I0421 10:06:24.569562 3162 policy_none.go:44] "Start" Apr 21 10:06:24.592098 kubelet[3162]: E0421 10:06:24.590128 3162 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:06:24.592098 kubelet[3162]: I0421 10:06:24.590442 3162 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:06:24.592098 kubelet[3162]: I0421 10:06:24.590463 3162 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:06:24.592098 kubelet[3162]: I0421 10:06:24.591450 3162 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 
21 10:06:24.600109 kubelet[3162]: E0421 10:06:24.597112 3162 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:06:24.634235 kubelet[3162]: I0421 10:06:24.634131 3162 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-11" Apr 21 10:06:24.637878 kubelet[3162]: I0421 10:06:24.637820 3162 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:24.641761 kubelet[3162]: I0421 10:06:24.640980 3162 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-11" Apr 21 10:06:24.693314 kubelet[3162]: I0421 10:06:24.693234 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:24.693314 kubelet[3162]: I0421 10:06:24.693309 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:24.693580 kubelet[3162]: I0421 10:06:24.693356 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 
10:06:24.693580 kubelet[3162]: I0421 10:06:24.693394 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b5e039f831c8ea0c4589c173271d02c-ca-certs\") pod \"kube-apiserver-ip-172-31-20-11\" (UID: \"5b5e039f831c8ea0c4589c173271d02c\") " pod="kube-system/kube-apiserver-ip-172-31-20-11" Apr 21 10:06:24.693580 kubelet[3162]: I0421 10:06:24.693436 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b5e039f831c8ea0c4589c173271d02c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-11\" (UID: \"5b5e039f831c8ea0c4589c173271d02c\") " pod="kube-system/kube-apiserver-ip-172-31-20-11" Apr 21 10:06:24.693580 kubelet[3162]: I0421 10:06:24.693472 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:24.693580 kubelet[3162]: I0421 10:06:24.693508 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53bc613d3b4a50e5bcba1664ca1eee28-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-11\" (UID: \"53bc613d3b4a50e5bcba1664ca1eee28\") " pod="kube-system/kube-controller-manager-ip-172-31-20-11" Apr 21 10:06:24.693858 kubelet[3162]: I0421 10:06:24.693596 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1a049c8fcc29172082c0a68710c99e8-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-11\" (UID: 
\"c1a049c8fcc29172082c0a68710c99e8\") " pod="kube-system/kube-scheduler-ip-172-31-20-11" Apr 21 10:06:24.693858 kubelet[3162]: I0421 10:06:24.693634 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b5e039f831c8ea0c4589c173271d02c-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-11\" (UID: \"5b5e039f831c8ea0c4589c173271d02c\") " pod="kube-system/kube-apiserver-ip-172-31-20-11" Apr 21 10:06:24.701909 kubelet[3162]: E0421 10:06:24.701188 3162 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-11\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-11" Apr 21 10:06:24.737028 kubelet[3162]: I0421 10:06:24.736971 3162 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-20-11" Apr 21 10:06:24.769314 kubelet[3162]: I0421 10:06:24.767532 3162 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-20-11" Apr 21 10:06:24.769314 kubelet[3162]: I0421 10:06:24.767667 3162 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-20-11" Apr 21 10:06:24.817036 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3220) Apr 21 10:06:25.215696 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3219) Apr 21 10:06:25.224954 kubelet[3162]: I0421 10:06:25.224781 3162 apiserver.go:52] "Watching apiserver" Apr 21 10:06:25.274537 kubelet[3162]: I0421 10:06:25.274416 3162 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:06:25.556490 kubelet[3162]: I0421 10:06:25.556367 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-11" podStartSLOduration=1.5563267299999999 podStartE2EDuration="1.55632673s" podCreationTimestamp="2026-04-21 10:06:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:06:25.552567136 +0000 UTC m=+1.469929265" watchObservedRunningTime="2026-04-21 10:06:25.55632673 +0000 UTC m=+1.473688835" Apr 21 10:06:25.605311 kubelet[3162]: I0421 10:06:25.605217 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-11" podStartSLOduration=4.605197562 podStartE2EDuration="4.605197562s" podCreationTimestamp="2026-04-21 10:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:06:25.572490997 +0000 UTC m=+1.489853126" watchObservedRunningTime="2026-04-21 10:06:25.605197562 +0000 UTC m=+1.522559667" Apr 21 10:06:25.669671 kubelet[3162]: I0421 10:06:25.669567 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-11" podStartSLOduration=1.6695476930000002 podStartE2EDuration="1.669547693s" podCreationTimestamp="2026-04-21 10:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:06:25.605606042 +0000 UTC m=+1.522968171" watchObservedRunningTime="2026-04-21 10:06:25.669547693 +0000 UTC m=+1.586909798" Apr 21 10:06:28.759968 kubelet[3162]: I0421 10:06:28.759903 3162 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:06:28.761792 containerd[1955]: time="2026-04-21T10:06:28.760843698Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 21 10:06:28.764304 kubelet[3162]: I0421 10:06:28.762942 3162 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:06:29.554056 systemd[1]: Created slice kubepods-besteffort-pod22b035df_a196_4586_ac32_0bca169bbf09.slice - libcontainer container kubepods-besteffort-pod22b035df_a196_4586_ac32_0bca169bbf09.slice. Apr 21 10:06:29.635082 kubelet[3162]: I0421 10:06:29.634356 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22b035df-a196-4586-ac32-0bca169bbf09-kube-proxy\") pod \"kube-proxy-jqzzb\" (UID: \"22b035df-a196-4586-ac32-0bca169bbf09\") " pod="kube-system/kube-proxy-jqzzb" Apr 21 10:06:29.635082 kubelet[3162]: I0421 10:06:29.634436 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b035df-a196-4586-ac32-0bca169bbf09-lib-modules\") pod \"kube-proxy-jqzzb\" (UID: \"22b035df-a196-4586-ac32-0bca169bbf09\") " pod="kube-system/kube-proxy-jqzzb" Apr 21 10:06:29.635082 kubelet[3162]: I0421 10:06:29.634494 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b035df-a196-4586-ac32-0bca169bbf09-xtables-lock\") pod \"kube-proxy-jqzzb\" (UID: \"22b035df-a196-4586-ac32-0bca169bbf09\") " pod="kube-system/kube-proxy-jqzzb" Apr 21 10:06:29.635501 kubelet[3162]: I0421 10:06:29.635386 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bczld\" (UniqueName: \"kubernetes.io/projected/22b035df-a196-4586-ac32-0bca169bbf09-kube-api-access-bczld\") pod \"kube-proxy-jqzzb\" (UID: \"22b035df-a196-4586-ac32-0bca169bbf09\") " pod="kube-system/kube-proxy-jqzzb" Apr 21 10:06:29.761702 kubelet[3162]: E0421 10:06:29.761635 3162 projected.go:291] Couldn't get configMap 
kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 21 10:06:29.762964 kubelet[3162]: E0421 10:06:29.762314 3162 projected.go:196] Error preparing data for projected volume kube-api-access-bczld for pod kube-system/kube-proxy-jqzzb: configmap "kube-root-ca.crt" not found Apr 21 10:06:29.762964 kubelet[3162]: E0421 10:06:29.762927 3162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22b035df-a196-4586-ac32-0bca169bbf09-kube-api-access-bczld podName:22b035df-a196-4586-ac32-0bca169bbf09 nodeName:}" failed. No retries permitted until 2026-04-21 10:06:30.262847177 +0000 UTC m=+6.180209294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bczld" (UniqueName: "kubernetes.io/projected/22b035df-a196-4586-ac32-0bca169bbf09-kube-api-access-bczld") pod "kube-proxy-jqzzb" (UID: "22b035df-a196-4586-ac32-0bca169bbf09") : configmap "kube-root-ca.crt" not found Apr 21 10:06:30.097085 systemd[1]: Created slice kubepods-besteffort-pod4e1de161_af2b_4f77_a406_898e76c6edbf.slice - libcontainer container kubepods-besteffort-pod4e1de161_af2b_4f77_a406_898e76c6edbf.slice. 
Apr 21 10:06:30.138853 kubelet[3162]: I0421 10:06:30.138639 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e1de161-af2b-4f77-a406-898e76c6edbf-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-6vz9l\" (UID: \"4e1de161-af2b-4f77-a406-898e76c6edbf\") " pod="tigera-operator/tigera-operator-6cf4cccc57-6vz9l" Apr 21 10:06:30.138853 kubelet[3162]: I0421 10:06:30.138792 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-674zw\" (UniqueName: \"kubernetes.io/projected/4e1de161-af2b-4f77-a406-898e76c6edbf-kube-api-access-674zw\") pod \"tigera-operator-6cf4cccc57-6vz9l\" (UID: \"4e1de161-af2b-4f77-a406-898e76c6edbf\") " pod="tigera-operator/tigera-operator-6cf4cccc57-6vz9l" Apr 21 10:06:30.416099 containerd[1955]: time="2026-04-21T10:06:30.415501289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-6vz9l,Uid:4e1de161-af2b-4f77-a406-898e76c6edbf,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:06:30.466065 containerd[1955]: time="2026-04-21T10:06:30.465879822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:06:30.466273 containerd[1955]: time="2026-04-21T10:06:30.466035096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:06:30.466413 containerd[1955]: time="2026-04-21T10:06:30.466250399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:30.466892 containerd[1955]: time="2026-04-21T10:06:30.466700264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:30.479631 containerd[1955]: time="2026-04-21T10:06:30.478753088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqzzb,Uid:22b035df-a196-4586-ac32-0bca169bbf09,Namespace:kube-system,Attempt:0,}" Apr 21 10:06:30.519985 systemd[1]: Started cri-containerd-ba70ae438ab14005cc968aef7fbe9f2345663a51d888de4657b4e658697b5561.scope - libcontainer container ba70ae438ab14005cc968aef7fbe9f2345663a51d888de4657b4e658697b5561. Apr 21 10:06:30.543414 containerd[1955]: time="2026-04-21T10:06:30.543044174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:06:30.544167 containerd[1955]: time="2026-04-21T10:06:30.543356114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:06:30.544167 containerd[1955]: time="2026-04-21T10:06:30.543692438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:30.544167 containerd[1955]: time="2026-04-21T10:06:30.544004882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:30.587854 systemd[1]: Started cri-containerd-7bb9f3a0f7c2dcf13b6e3ccce18e43979cba4c618e0d6d68cbd5003e7b925e44.scope - libcontainer container 7bb9f3a0f7c2dcf13b6e3ccce18e43979cba4c618e0d6d68cbd5003e7b925e44. 
Apr 21 10:06:30.620584 containerd[1955]: time="2026-04-21T10:06:30.620483042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-6vz9l,Uid:4e1de161-af2b-4f77-a406-898e76c6edbf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ba70ae438ab14005cc968aef7fbe9f2345663a51d888de4657b4e658697b5561\"" Apr 21 10:06:30.628966 containerd[1955]: time="2026-04-21T10:06:30.628903895Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:06:30.648666 containerd[1955]: time="2026-04-21T10:06:30.648457047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqzzb,Uid:22b035df-a196-4586-ac32-0bca169bbf09,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bb9f3a0f7c2dcf13b6e3ccce18e43979cba4c618e0d6d68cbd5003e7b925e44\"" Apr 21 10:06:30.660598 containerd[1955]: time="2026-04-21T10:06:30.660495571Z" level=info msg="CreateContainer within sandbox \"7bb9f3a0f7c2dcf13b6e3ccce18e43979cba4c618e0d6d68cbd5003e7b925e44\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:06:30.696125 containerd[1955]: time="2026-04-21T10:06:30.695915408Z" level=info msg="CreateContainer within sandbox \"7bb9f3a0f7c2dcf13b6e3ccce18e43979cba4c618e0d6d68cbd5003e7b925e44\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"181250eebdc304cd8e6046bfb2ea9d20f03b9323ce01cc095a368ee4a6b76d82\"" Apr 21 10:06:30.698095 containerd[1955]: time="2026-04-21T10:06:30.698009147Z" level=info msg="StartContainer for \"181250eebdc304cd8e6046bfb2ea9d20f03b9323ce01cc095a368ee4a6b76d82\"" Apr 21 10:06:30.747071 systemd[1]: Started cri-containerd-181250eebdc304cd8e6046bfb2ea9d20f03b9323ce01cc095a368ee4a6b76d82.scope - libcontainer container 181250eebdc304cd8e6046bfb2ea9d20f03b9323ce01cc095a368ee4a6b76d82. 
Apr 21 10:06:30.804050 containerd[1955]: time="2026-04-21T10:06:30.803971234Z" level=info msg="StartContainer for \"181250eebdc304cd8e6046bfb2ea9d20f03b9323ce01cc095a368ee4a6b76d82\" returns successfully" Apr 21 10:06:31.517042 kubelet[3162]: I0421 10:06:31.516813 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-jqzzb" podStartSLOduration=2.516782382 podStartE2EDuration="2.516782382s" podCreationTimestamp="2026-04-21 10:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:06:31.515881392 +0000 UTC m=+7.433243509" watchObservedRunningTime="2026-04-21 10:06:31.516782382 +0000 UTC m=+7.434144499" Apr 21 10:06:32.037296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505934620.mount: Deactivated successfully. Apr 21 10:06:33.166371 containerd[1955]: time="2026-04-21T10:06:33.166300132Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:33.168043 containerd[1955]: time="2026-04-21T10:06:33.167973757Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 21 10:06:33.169670 containerd[1955]: time="2026-04-21T10:06:33.169439665Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:33.174700 containerd[1955]: time="2026-04-21T10:06:33.174643822Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:06:33.176589 containerd[1955]: time="2026-04-21T10:06:33.176366539Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id 
\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.547398425s" Apr 21 10:06:33.176589 containerd[1955]: time="2026-04-21T10:06:33.176421911Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 21 10:06:33.187147 containerd[1955]: time="2026-04-21T10:06:33.186824149Z" level=info msg="CreateContainer within sandbox \"ba70ae438ab14005cc968aef7fbe9f2345663a51d888de4657b4e658697b5561\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:06:33.213582 containerd[1955]: time="2026-04-21T10:06:33.213438438Z" level=info msg="CreateContainer within sandbox \"ba70ae438ab14005cc968aef7fbe9f2345663a51d888de4657b4e658697b5561\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5\"" Apr 21 10:06:33.216566 containerd[1955]: time="2026-04-21T10:06:33.216239654Z" level=info msg="StartContainer for \"d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5\"" Apr 21 10:06:33.273844 systemd[1]: Started cri-containerd-d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5.scope - libcontainer container d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5. 
Apr 21 10:06:33.323624 containerd[1955]: time="2026-04-21T10:06:33.323182487Z" level=info msg="StartContainer for \"d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5\" returns successfully" Apr 21 10:06:40.147977 kubelet[3162]: I0421 10:06:40.147849 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-6vz9l" podStartSLOduration=8.596642183 podStartE2EDuration="11.147833435s" podCreationTimestamp="2026-04-21 10:06:29 +0000 UTC" firstStartedPulling="2026-04-21 10:06:30.627127943 +0000 UTC m=+6.544490048" lastFinishedPulling="2026-04-21 10:06:33.178319183 +0000 UTC m=+9.095681300" observedRunningTime="2026-04-21 10:06:33.522534459 +0000 UTC m=+9.439896576" watchObservedRunningTime="2026-04-21 10:06:40.147833435 +0000 UTC m=+16.065195540" Apr 21 10:06:40.790896 sudo[2272]: pam_unix(sudo:session): session closed for user root Apr 21 10:06:40.957399 sshd[2266]: pam_unix(sshd:session): session closed for user core Apr 21 10:06:40.965800 systemd[1]: sshd@6-172.31.20.11:22-4.175.71.9:50990.service: Deactivated successfully. Apr 21 10:06:40.974437 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:06:40.975201 systemd[1]: session-7.scope: Consumed 8.789s CPU time, 153.2M memory peak, 0B memory swap peak. Apr 21 10:06:40.981348 systemd-logind[1930]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:06:40.985349 systemd-logind[1930]: Removed session 7. Apr 21 10:06:59.132652 systemd[1]: Created slice kubepods-besteffort-podf1140441_5ca2_434a_8fa9_ab001491c1aa.slice - libcontainer container kubepods-besteffort-podf1140441_5ca2_434a_8fa9_ab001491c1aa.slice. 
Apr 21 10:06:59.140891 kubelet[3162]: E0421 10:06:59.140830 3162 status_manager.go:1045] "Failed to get status for pod" err="pods \"calico-typha-8b9c97b6-cdsvn\" is forbidden: User \"system:node:ip-172-31-20-11\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-20-11' and this object" podUID="f1140441-5ca2-434a-8fa9-ab001491c1aa" pod="calico-system/calico-typha-8b9c97b6-cdsvn" Apr 21 10:06:59.246215 kubelet[3162]: I0421 10:06:59.245970 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1140441-5ca2-434a-8fa9-ab001491c1aa-tigera-ca-bundle\") pod \"calico-typha-8b9c97b6-cdsvn\" (UID: \"f1140441-5ca2-434a-8fa9-ab001491c1aa\") " pod="calico-system/calico-typha-8b9c97b6-cdsvn" Apr 21 10:06:59.246215 kubelet[3162]: I0421 10:06:59.246041 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsb6b\" (UniqueName: \"kubernetes.io/projected/f1140441-5ca2-434a-8fa9-ab001491c1aa-kube-api-access-lsb6b\") pod \"calico-typha-8b9c97b6-cdsvn\" (UID: \"f1140441-5ca2-434a-8fa9-ab001491c1aa\") " pod="calico-system/calico-typha-8b9c97b6-cdsvn" Apr 21 10:06:59.246215 kubelet[3162]: I0421 10:06:59.246086 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f1140441-5ca2-434a-8fa9-ab001491c1aa-typha-certs\") pod \"calico-typha-8b9c97b6-cdsvn\" (UID: \"f1140441-5ca2-434a-8fa9-ab001491c1aa\") " pod="calico-system/calico-typha-8b9c97b6-cdsvn" Apr 21 10:06:59.407485 systemd[1]: Created slice kubepods-besteffort-podf55657b9_fe08_42c5_ba8a_a2349b69285b.slice - libcontainer container kubepods-besteffort-podf55657b9_fe08_42c5_ba8a_a2349b69285b.slice. 
Apr 21 10:06:59.451830 kubelet[3162]: I0421 10:06:59.451749 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-xtables-lock\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.451830 kubelet[3162]: I0421 10:06:59.451832 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-bpffs\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452067 kubelet[3162]: I0421 10:06:59.451872 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-flexvol-driver-host\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452067 kubelet[3162]: I0421 10:06:59.451912 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-cni-bin-dir\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452067 kubelet[3162]: I0421 10:06:59.451954 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-nodeproc\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452067 kubelet[3162]: I0421 10:06:59.451993 3162 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-cni-net-dir\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452067 kubelet[3162]: I0421 10:06:59.452029 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f55657b9-fe08-42c5-ba8a-a2349b69285b-node-certs\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452337 kubelet[3162]: I0421 10:06:59.452064 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-policysync\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452337 kubelet[3162]: I0421 10:06:59.452099 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-sys-fs\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452337 kubelet[3162]: I0421 10:06:59.452142 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-var-lib-calico\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452337 kubelet[3162]: I0421 10:06:59.452178 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-var-run-calico\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452337 kubelet[3162]: I0421 10:06:59.452213 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55g9t\" (UniqueName: \"kubernetes.io/projected/f55657b9-fe08-42c5-ba8a-a2349b69285b-kube-api-access-55g9t\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452680 kubelet[3162]: I0421 10:06:59.452257 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-cni-log-dir\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452680 kubelet[3162]: I0421 10:06:59.452294 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f55657b9-fe08-42c5-ba8a-a2349b69285b-tigera-ca-bundle\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.452680 kubelet[3162]: I0421 10:06:59.452335 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f55657b9-fe08-42c5-ba8a-a2349b69285b-lib-modules\") pod \"calico-node-fg7rc\" (UID: \"f55657b9-fe08-42c5-ba8a-a2349b69285b\") " pod="calico-system/calico-node-fg7rc" Apr 21 10:06:59.558420 kubelet[3162]: E0421 10:06:59.557565 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.558420 kubelet[3162]: W0421 10:06:59.557609 3162 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.558420 kubelet[3162]: E0421 10:06:59.557650 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:06:59.558420 kubelet[3162]: E0421 10:06:59.558085 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.558420 kubelet[3162]: W0421 10:06:59.558107 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.558420 kubelet[3162]: E0421 10:06:59.558134 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:06:59.559443 kubelet[3162]: E0421 10:06:59.559054 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.559443 kubelet[3162]: W0421 10:06:59.559087 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.559443 kubelet[3162]: E0421 10:06:59.559121 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:06:59.560905 kubelet[3162]: E0421 10:06:59.560871 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.561056 kubelet[3162]: W0421 10:06:59.561025 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.561181 kubelet[3162]: E0421 10:06:59.561157 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:06:59.562806 kubelet[3162]: E0421 10:06:59.562763 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.563013 kubelet[3162]: W0421 10:06:59.562983 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.563140 kubelet[3162]: E0421 10:06:59.563115 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:06:59.565278 kubelet[3162]: E0421 10:06:59.565239 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.565731 kubelet[3162]: W0421 10:06:59.565466 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.565731 kubelet[3162]: E0421 10:06:59.565553 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:06:59.566475 kubelet[3162]: E0421 10:06:59.566133 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.566475 kubelet[3162]: W0421 10:06:59.566162 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.566475 kubelet[3162]: E0421 10:06:59.566192 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:06:59.569303 kubelet[3162]: E0421 10:06:59.569266 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.571560 kubelet[3162]: W0421 10:06:59.569482 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.571560 kubelet[3162]: E0421 10:06:59.569553 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:06:59.574544 kubelet[3162]: E0421 10:06:59.572729 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.574544 kubelet[3162]: W0421 10:06:59.572765 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.574544 kubelet[3162]: E0421 10:06:59.572824 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:06:59.575639 kubelet[3162]: E0421 10:06:59.575279 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.575639 kubelet[3162]: W0421 10:06:59.575314 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.575639 kubelet[3162]: E0421 10:06:59.575346 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:06:59.578156 kubelet[3162]: E0421 10:06:59.577945 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.578156 kubelet[3162]: W0421 10:06:59.577980 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.578156 kubelet[3162]: E0421 10:06:59.578015 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:06:59.580183 kubelet[3162]: E0421 10:06:59.579383 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.580183 kubelet[3162]: W0421 10:06:59.579413 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.580183 kubelet[3162]: E0421 10:06:59.579447 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:06:59.582736 kubelet[3162]: E0421 10:06:59.580793 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.582736 kubelet[3162]: W0421 10:06:59.580823 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.582736 kubelet[3162]: E0421 10:06:59.580860 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:06:59.583244 kubelet[3162]: E0421 10:06:59.583213 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.583613 kubelet[3162]: W0421 10:06:59.583357 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.583613 kubelet[3162]: E0421 10:06:59.583402 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:06:59.584393 kubelet[3162]: E0421 10:06:59.584220 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.584393 kubelet[3162]: W0421 10:06:59.584251 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.584393 kubelet[3162]: E0421 10:06:59.584284 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 21 10:06:59.585135 kubelet[3162]: E0421 10:06:59.585106 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:06:59.585327 kubelet[3162]: W0421 10:06:59.585295 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:06:59.585619 kubelet[3162]: E0421 10:06:59.585590 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:06:59.722478 containerd[1955]: time="2026-04-21T10:06:59.722386797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fg7rc,Uid:f55657b9-fe08-42c5-ba8a-a2349b69285b,Namespace:calico-system,Attempt:0,}"
Apr 21 10:06:59.725765 kubelet[3162]: E0421 10:06:59.725474 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11"
Apr 21 10:06:59.746371 containerd[1955]: time="2026-04-21T10:06:59.745949280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b9c97b6-cdsvn,Uid:f1140441-5ca2-434a-8fa9-ab001491c1aa,Namespace:calico-system,Attempt:0,}"
Apr 21 10:06:59.844135 containerd[1955]: time="2026-04-21T10:06:59.840694378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:06:59.844135 containerd[1955]: time="2026-04-21T10:06:59.840795108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:06:59.844135 containerd[1955]: time="2026-04-21T10:06:59.840849015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:06:59.844135 containerd[1955]: time="2026-04-21T10:06:59.841089736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:06:59.860556 kubelet[3162]: I0421 10:06:59.859024 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csh4w\" (UniqueName: \"kubernetes.io/projected/678541e7-a702-489b-a934-cdd3a561ab11-kube-api-access-csh4w\") pod \"csi-node-driver-8sx57\" (UID: \"678541e7-a702-489b-a934-cdd3a561ab11\") " pod="calico-system/csi-node-driver-8sx57"
Apr 21 10:06:59.861802 kubelet[3162]: I0421 10:06:59.861097 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/678541e7-a702-489b-a934-cdd3a561ab11-socket-dir\") pod \"csi-node-driver-8sx57\" (UID: \"678541e7-a702-489b-a934-cdd3a561ab11\") " pod="calico-system/csi-node-driver-8sx57"
Apr 21 10:06:59.865186 kubelet[3162]: I0421 10:06:59.864965 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/678541e7-a702-489b-a934-cdd3a561ab11-varrun\") pod \"csi-node-driver-8sx57\" (UID: \"678541e7-a702-489b-a934-cdd3a561ab11\") " pod="calico-system/csi-node-driver-8sx57"
Apr 21 10:06:59.867159 kubelet[3162]: I0421 10:06:59.866932 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/678541e7-a702-489b-a934-cdd3a561ab11-kubelet-dir\") pod \"csi-node-driver-8sx57\" (UID: \"678541e7-a702-489b-a934-cdd3a561ab11\") " pod="calico-system/csi-node-driver-8sx57"
Apr 21 10:06:59.874551 kubelet[3162]: I0421 10:06:59.873846 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/678541e7-a702-489b-a934-cdd3a561ab11-registration-dir\") pod \"csi-node-driver-8sx57\" (UID: \"678541e7-a702-489b-a934-cdd3a561ab11\") " pod="calico-system/csi-node-driver-8sx57"
Apr 21 10:06:59.890567 kubelet[3162]: E0421 10:06:59.889974 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:06:59.894972 kubelet[3162]: E0421 10:06:59.893317 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.894972 kubelet[3162]: W0421 10:06:59.893362 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.894972 kubelet[3162]: E0421 10:06:59.893578 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:06:59.924752 systemd[1]: Started cri-containerd-b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2.scope - libcontainer container b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2. Apr 21 10:06:59.966630 containerd[1955]: time="2026-04-21T10:06:59.964840945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:06:59.966630 containerd[1955]: time="2026-04-21T10:06:59.964950080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:06:59.966630 containerd[1955]: time="2026-04-21T10:06:59.964986710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:59.966630 containerd[1955]: time="2026-04-21T10:06:59.965149991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:06:59.999203 kubelet[3162]: E0421 10:06:59.998492 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:06:59.999203 kubelet[3162]: W0421 10:06:59.998562 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:06:59.999203 kubelet[3162]: E0421 10:06:59.998602 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.001208 kubelet[3162]: E0421 10:07:00.000718 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.001208 kubelet[3162]: W0421 10:07:00.000781 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.001208 kubelet[3162]: E0421 10:07:00.000846 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.004434 kubelet[3162]: E0421 10:07:00.003578 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.004434 kubelet[3162]: W0421 10:07:00.003640 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.004434 kubelet[3162]: E0421 10:07:00.003685 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.005703 kubelet[3162]: E0421 10:07:00.005388 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.005954 kubelet[3162]: W0421 10:07:00.005885 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.006320 kubelet[3162]: E0421 10:07:00.006060 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.008786 kubelet[3162]: E0421 10:07:00.008653 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.008786 kubelet[3162]: W0421 10:07:00.008688 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.008786 kubelet[3162]: E0421 10:07:00.008751 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.010715 kubelet[3162]: E0421 10:07:00.009993 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.010715 kubelet[3162]: W0421 10:07:00.010060 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.010715 kubelet[3162]: E0421 10:07:00.010099 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.013176 kubelet[3162]: E0421 10:07:00.012157 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.013176 kubelet[3162]: W0421 10:07:00.012192 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.013176 kubelet[3162]: E0421 10:07:00.012249 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.016867 kubelet[3162]: E0421 10:07:00.016575 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.016867 kubelet[3162]: W0421 10:07:00.016617 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.016867 kubelet[3162]: E0421 10:07:00.016654 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.019826 kubelet[3162]: E0421 10:07:00.019454 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.019826 kubelet[3162]: W0421 10:07:00.019493 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.019826 kubelet[3162]: E0421 10:07:00.019558 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.023218 kubelet[3162]: E0421 10:07:00.022706 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.023218 kubelet[3162]: W0421 10:07:00.022744 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.023218 kubelet[3162]: E0421 10:07:00.022785 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.025157 kubelet[3162]: E0421 10:07:00.024649 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.025157 kubelet[3162]: W0421 10:07:00.024685 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.025157 kubelet[3162]: E0421 10:07:00.024722 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.026780 kubelet[3162]: E0421 10:07:00.026471 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.026780 kubelet[3162]: W0421 10:07:00.026547 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.026780 kubelet[3162]: E0421 10:07:00.026588 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.029830 kubelet[3162]: E0421 10:07:00.029699 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.029830 kubelet[3162]: W0421 10:07:00.029750 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.029830 kubelet[3162]: E0421 10:07:00.029791 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.031611 kubelet[3162]: E0421 10:07:00.031275 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.031611 kubelet[3162]: W0421 10:07:00.031312 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.031611 kubelet[3162]: E0421 10:07:00.031353 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.032367 kubelet[3162]: E0421 10:07:00.032092 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.032367 kubelet[3162]: W0421 10:07:00.032121 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.032367 kubelet[3162]: E0421 10:07:00.032148 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.034223 kubelet[3162]: E0421 10:07:00.033967 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.034223 kubelet[3162]: W0421 10:07:00.034000 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.034223 kubelet[3162]: E0421 10:07:00.034035 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.035057 kubelet[3162]: E0421 10:07:00.034810 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.035057 kubelet[3162]: W0421 10:07:00.034854 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.035057 kubelet[3162]: E0421 10:07:00.034889 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.035941 kubelet[3162]: E0421 10:07:00.035697 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.035941 kubelet[3162]: W0421 10:07:00.035728 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.035941 kubelet[3162]: E0421 10:07:00.035783 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.036466 kubelet[3162]: E0421 10:07:00.036438 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.037006 kubelet[3162]: W0421 10:07:00.036697 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.037006 kubelet[3162]: E0421 10:07:00.036739 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.038566 kubelet[3162]: E0421 10:07:00.038450 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.039113 kubelet[3162]: W0421 10:07:00.038773 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.039113 kubelet[3162]: E0421 10:07:00.038827 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.039414 kubelet[3162]: E0421 10:07:00.039386 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.039573 kubelet[3162]: W0421 10:07:00.039543 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.039736 kubelet[3162]: E0421 10:07:00.039710 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.039773 systemd[1]: Started cri-containerd-b9b7c33ee2e04f71c02cb02d16f77286fa412c3ec99adc30fa7701ee7236a30f.scope - libcontainer container b9b7c33ee2e04f71c02cb02d16f77286fa412c3ec99adc30fa7701ee7236a30f. Apr 21 10:07:00.045543 kubelet[3162]: E0421 10:07:00.044331 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.045543 kubelet[3162]: W0421 10:07:00.044368 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.045543 kubelet[3162]: E0421 10:07:00.044446 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.045982 kubelet[3162]: E0421 10:07:00.045946 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.046607 kubelet[3162]: W0421 10:07:00.046406 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.046786 kubelet[3162]: E0421 10:07:00.046760 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.052842 kubelet[3162]: E0421 10:07:00.052789 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.053102 kubelet[3162]: W0421 10:07:00.053052 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.053632 kubelet[3162]: E0421 10:07:00.053378 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.055020 kubelet[3162]: E0421 10:07:00.054957 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.055020 kubelet[3162]: W0421 10:07:00.055002 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.055235 kubelet[3162]: E0421 10:07:00.055043 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:07:00.140191 kubelet[3162]: E0421 10:07:00.139779 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:07:00.140191 kubelet[3162]: W0421 10:07:00.139849 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:07:00.140191 kubelet[3162]: E0421 10:07:00.139895 3162 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:07:00.142302 containerd[1955]: time="2026-04-21T10:07:00.142009947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fg7rc,Uid:f55657b9-fe08-42c5-ba8a-a2349b69285b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2\"" Apr 21 10:07:00.149631 containerd[1955]: time="2026-04-21T10:07:00.149323678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:07:00.272932 containerd[1955]: time="2026-04-21T10:07:00.272779263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b9c97b6-cdsvn,Uid:f1140441-5ca2-434a-8fa9-ab001491c1aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9b7c33ee2e04f71c02cb02d16f77286fa412c3ec99adc30fa7701ee7236a30f\"" Apr 21 10:07:01.325293 kubelet[3162]: E0421 10:07:01.325205 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:01.602677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013555483.mount: Deactivated successfully. 
Apr 21 10:07:01.750013 containerd[1955]: time="2026-04-21T10:07:01.749924697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:01.753650 containerd[1955]: time="2026-04-21T10:07:01.753588100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=5855345" Apr 21 10:07:01.756065 containerd[1955]: time="2026-04-21T10:07:01.755980896Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:01.761046 containerd[1955]: time="2026-04-21T10:07:01.760936661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:01.763549 containerd[1955]: time="2026-04-21T10:07:01.762654311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.613227646s" Apr 21 10:07:01.763549 containerd[1955]: time="2026-04-21T10:07:01.762725327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 21 10:07:01.766628 containerd[1955]: time="2026-04-21T10:07:01.766279415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:07:01.774639 containerd[1955]: time="2026-04-21T10:07:01.774408989Z" level=info msg="CreateContainer within 
sandbox \"b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:07:01.805962 containerd[1955]: time="2026-04-21T10:07:01.805778675Z" level=info msg="CreateContainer within sandbox \"b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2d72562462d80cb24969f0ba667857e8037fc30a7da77b022c7fb93cd3f2ff93\"" Apr 21 10:07:01.811567 containerd[1955]: time="2026-04-21T10:07:01.809355070Z" level=info msg="StartContainer for \"2d72562462d80cb24969f0ba667857e8037fc30a7da77b022c7fb93cd3f2ff93\"" Apr 21 10:07:01.879176 systemd[1]: Started cri-containerd-2d72562462d80cb24969f0ba667857e8037fc30a7da77b022c7fb93cd3f2ff93.scope - libcontainer container 2d72562462d80cb24969f0ba667857e8037fc30a7da77b022c7fb93cd3f2ff93. Apr 21 10:07:01.933186 containerd[1955]: time="2026-04-21T10:07:01.933091344Z" level=info msg="StartContainer for \"2d72562462d80cb24969f0ba667857e8037fc30a7da77b022c7fb93cd3f2ff93\" returns successfully" Apr 21 10:07:01.963329 systemd[1]: cri-containerd-2d72562462d80cb24969f0ba667857e8037fc30a7da77b022c7fb93cd3f2ff93.scope: Deactivated successfully. Apr 21 10:07:02.287748 containerd[1955]: time="2026-04-21T10:07:02.287642838Z" level=info msg="shim disconnected" id=2d72562462d80cb24969f0ba667857e8037fc30a7da77b022c7fb93cd3f2ff93 namespace=k8s.io Apr 21 10:07:02.288088 containerd[1955]: time="2026-04-21T10:07:02.288056324Z" level=warning msg="cleaning up after shim disconnected" id=2d72562462d80cb24969f0ba667857e8037fc30a7da77b022c7fb93cd3f2ff93 namespace=k8s.io Apr 21 10:07:02.288286 containerd[1955]: time="2026-04-21T10:07:02.288138926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:07:02.556442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d72562462d80cb24969f0ba667857e8037fc30a7da77b022c7fb93cd3f2ff93-rootfs.mount: Deactivated successfully. 
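The pod2daemon-flexvol pull record above reports 5855167 bytes read in 1.613227646s; the implied transfer rate can be cross-checked with a little arithmetic, using only the figures copied from the log:

```go
package main

import "fmt"

func main() {
	// Figures from the containerd "Pulled image" record for
	// ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4.
	const bytesRead = 5855167.0 // reported image size in bytes
	const seconds = 1.613227646 // reported pull duration
	rate := bytesRead / seconds / (1 << 20)
	fmt.Printf("%.2f MiB/s\n", rate) // prints "3.46 MiB/s"
}
```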
Apr 21 10:07:03.326603 kubelet[3162]: E0421 10:07:03.324708 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:04.283813 containerd[1955]: time="2026-04-21T10:07:04.283736063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:04.285844 containerd[1955]: time="2026-04-21T10:07:04.285747633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=32467511" Apr 21 10:07:04.288230 containerd[1955]: time="2026-04-21T10:07:04.286744851Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:04.292796 containerd[1955]: time="2026-04-21T10:07:04.292733420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:04.294762 containerd[1955]: time="2026-04-21T10:07:04.294688357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.52835069s" Apr 21 10:07:04.294922 containerd[1955]: time="2026-04-21T10:07:04.294806640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 21 10:07:04.299561 containerd[1955]: time="2026-04-21T10:07:04.299432552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:07:04.332325 containerd[1955]: time="2026-04-21T10:07:04.332246283Z" level=info msg="CreateContainer within sandbox \"b9b7c33ee2e04f71c02cb02d16f77286fa412c3ec99adc30fa7701ee7236a30f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:07:04.357026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300081704.mount: Deactivated successfully. Apr 21 10:07:04.365866 containerd[1955]: time="2026-04-21T10:07:04.365480536Z" level=info msg="CreateContainer within sandbox \"b9b7c33ee2e04f71c02cb02d16f77286fa412c3ec99adc30fa7701ee7236a30f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"808de11285d159dd400a092f89e24f37610f4b58ae677d405646bf083b5a2848\"" Apr 21 10:07:04.369562 containerd[1955]: time="2026-04-21T10:07:04.367904127Z" level=info msg="StartContainer for \"808de11285d159dd400a092f89e24f37610f4b58ae677d405646bf083b5a2848\"" Apr 21 10:07:04.436876 systemd[1]: Started cri-containerd-808de11285d159dd400a092f89e24f37610f4b58ae677d405646bf083b5a2848.scope - libcontainer container 808de11285d159dd400a092f89e24f37610f4b58ae677d405646bf083b5a2848. 
Apr 21 10:07:04.507965 containerd[1955]: time="2026-04-21T10:07:04.507887845Z" level=info msg="StartContainer for \"808de11285d159dd400a092f89e24f37610f4b58ae677d405646bf083b5a2848\" returns successfully" Apr 21 10:07:05.325583 kubelet[3162]: E0421 10:07:05.325274 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:05.657446 kubelet[3162]: I0421 10:07:05.657225 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-8b9c97b6-cdsvn" podStartSLOduration=2.635760182 podStartE2EDuration="6.657205172s" podCreationTimestamp="2026-04-21 10:06:59 +0000 UTC" firstStartedPulling="2026-04-21 10:07:00.275708499 +0000 UTC m=+36.193070604" lastFinishedPulling="2026-04-21 10:07:04.297153429 +0000 UTC m=+40.214515594" observedRunningTime="2026-04-21 10:07:04.660837358 +0000 UTC m=+40.578199548" watchObservedRunningTime="2026-04-21 10:07:05.657205172 +0000 UTC m=+41.574567277" Apr 21 10:07:07.325155 kubelet[3162]: E0421 10:07:07.325075 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:09.324590 kubelet[3162]: E0421 10:07:09.324426 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:11.325585 
kubelet[3162]: E0421 10:07:11.325410 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:11.326617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940508223.mount: Deactivated successfully. Apr 21 10:07:11.400335 containerd[1955]: time="2026-04-21T10:07:11.398689632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:11.400335 containerd[1955]: time="2026-04-21T10:07:11.400258792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 21 10:07:11.401651 containerd[1955]: time="2026-04-21T10:07:11.401531176Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:11.408882 containerd[1955]: time="2026-04-21T10:07:11.408819983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:11.410849 containerd[1955]: time="2026-04-21T10:07:11.410757391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 7.111249465s" Apr 21 10:07:11.410849 containerd[1955]: time="2026-04-21T10:07:11.410829163Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 21 10:07:11.422094 containerd[1955]: time="2026-04-21T10:07:11.421812912Z" level=info msg="CreateContainer within sandbox \"b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:07:11.449163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848807941.mount: Deactivated successfully. Apr 21 10:07:11.452667 containerd[1955]: time="2026-04-21T10:07:11.452590317Z" level=info msg="CreateContainer within sandbox \"b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424\"" Apr 21 10:07:11.456151 containerd[1955]: time="2026-04-21T10:07:11.456059475Z" level=info msg="StartContainer for \"6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424\"" Apr 21 10:07:11.532852 systemd[1]: Started cri-containerd-6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424.scope - libcontainer container 6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424. Apr 21 10:07:11.593317 containerd[1955]: time="2026-04-21T10:07:11.592348850Z" level=info msg="StartContainer for \"6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424\" returns successfully" Apr 21 10:07:11.824250 systemd[1]: cri-containerd-6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424.scope: Deactivated successfully. Apr 21 10:07:12.324115 systemd[1]: run-containerd-runc-k8s.io-6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424-runc.paqJh1.mount: Deactivated successfully. Apr 21 10:07:12.324307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424-rootfs.mount: Deactivated successfully. 
Apr 21 10:07:12.633706 containerd[1955]: time="2026-04-21T10:07:12.633296744Z" level=info msg="shim disconnected" id=6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424 namespace=k8s.io Apr 21 10:07:12.633706 containerd[1955]: time="2026-04-21T10:07:12.633578416Z" level=warning msg="cleaning up after shim disconnected" id=6e3f253c253659cae9d6eb73b2db6c9f32e78f3f590dd33abaac7c3f3a49a424 namespace=k8s.io Apr 21 10:07:12.634998 containerd[1955]: time="2026-04-21T10:07:12.633611805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:07:12.672949 containerd[1955]: time="2026-04-21T10:07:12.672863489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:07:13.325600 kubelet[3162]: E0421 10:07:13.325068 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:15.325493 kubelet[3162]: E0421 10:07:15.325414 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:15.954774 containerd[1955]: time="2026-04-21T10:07:15.954242402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:15.956192 containerd[1955]: time="2026-04-21T10:07:15.956115050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 21 10:07:15.957709 containerd[1955]: time="2026-04-21T10:07:15.957199732Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:15.962595 containerd[1955]: time="2026-04-21T10:07:15.962473919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:15.964764 containerd[1955]: time="2026-04-21T10:07:15.964690683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.291743176s" Apr 21 10:07:15.965101 containerd[1955]: time="2026-04-21T10:07:15.964956256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 21 10:07:15.972789 containerd[1955]: time="2026-04-21T10:07:15.972569045Z" level=info msg="CreateContainer within sandbox \"b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:07:15.996237 containerd[1955]: time="2026-04-21T10:07:15.996166826Z" level=info msg="CreateContainer within sandbox \"b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce\"" Apr 21 10:07:15.999485 containerd[1955]: time="2026-04-21T10:07:15.999339075Z" level=info msg="StartContainer for \"233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce\"" Apr 21 10:07:16.062845 systemd[1]: Started 
cri-containerd-233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce.scope - libcontainer container 233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce. Apr 21 10:07:16.128308 containerd[1955]: time="2026-04-21T10:07:16.128233427Z" level=info msg="StartContainer for \"233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce\" returns successfully" Apr 21 10:07:17.326154 kubelet[3162]: E0421 10:07:17.326022 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:17.815798 containerd[1955]: time="2026-04-21T10:07:17.815720236Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:07:17.820729 systemd[1]: cri-containerd-233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce.scope: Deactivated successfully. Apr 21 10:07:17.821198 systemd[1]: cri-containerd-233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce.scope: Consumed 1.006s CPU time. Apr 21 10:07:17.875876 kubelet[3162]: I0421 10:07:17.871780 3162 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 21 10:07:17.875203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce-rootfs.mount: Deactivated successfully. 
Apr 21 10:07:17.939418 containerd[1955]: time="2026-04-21T10:07:17.939318129Z" level=info msg="shim disconnected" id=233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce namespace=k8s.io Apr 21 10:07:17.939418 containerd[1955]: time="2026-04-21T10:07:17.939407646Z" level=warning msg="cleaning up after shim disconnected" id=233f78f839b0eb91c8297c5e9bb66b505c5b53a547d6c6a808a29b217a8892ce namespace=k8s.io Apr 21 10:07:17.939418 containerd[1955]: time="2026-04-21T10:07:17.939429869Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:07:17.972941 systemd[1]: Created slice kubepods-besteffort-pod845525b3_5736_4423_84eb_ae963b0e3b2b.slice - libcontainer container kubepods-besteffort-pod845525b3_5736_4423_84eb_ae963b0e3b2b.slice. Apr 21 10:07:17.989118 kubelet[3162]: E0421 10:07:17.989066 3162 status_manager.go:1045] "Failed to get status for pod" err="pods \"whisker-5668cff8fc-xc5cn\" is forbidden: User \"system:node:ip-172-31-20-11\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-20-11' and this object" podUID="845525b3-5736-4423-84eb-ae963b0e3b2b" pod="calico-system/whisker-5668cff8fc-xc5cn" Apr 21 10:07:17.993860 systemd[1]: Created slice kubepods-burstable-podaf9f8050_444d_4935_9809_2c239d4c35de.slice - libcontainer container kubepods-burstable-podaf9f8050_444d_4935_9809_2c239d4c35de.slice. Apr 21 10:07:18.041172 systemd[1]: Created slice kubepods-burstable-pod7842ac67_69e7_4228_9787_d5e2cbd9c0b8.slice - libcontainer container kubepods-burstable-pod7842ac67_69e7_4228_9787_d5e2cbd9c0b8.slice. Apr 21 10:07:18.059117 systemd[1]: Created slice kubepods-besteffort-poddcac6754_a482_47df_a01f_171bd1cc8980.slice - libcontainer container kubepods-besteffort-poddcac6754_a482_47df_a01f_171bd1cc8980.slice. 
Apr 21 10:07:18.071583 kubelet[3162]: I0421 10:07:18.070547 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af9f8050-444d-4935-9809-2c239d4c35de-config-volume\") pod \"coredns-7d764666f9-cxnd9\" (UID: \"af9f8050-444d-4935-9809-2c239d4c35de\") " pod="kube-system/coredns-7d764666f9-cxnd9" Apr 21 10:07:18.071583 kubelet[3162]: I0421 10:07:18.070765 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-ca-bundle\") pod \"whisker-5668cff8fc-xc5cn\" (UID: \"845525b3-5736-4423-84eb-ae963b0e3b2b\") " pod="calico-system/whisker-5668cff8fc-xc5cn" Apr 21 10:07:18.071583 kubelet[3162]: I0421 10:07:18.071150 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv8z6\" (UniqueName: \"kubernetes.io/projected/af9f8050-444d-4935-9809-2c239d4c35de-kube-api-access-cv8z6\") pod \"coredns-7d764666f9-cxnd9\" (UID: \"af9f8050-444d-4935-9809-2c239d4c35de\") " pod="kube-system/coredns-7d764666f9-cxnd9" Apr 21 10:07:18.077631 kubelet[3162]: I0421 10:07:18.077497 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-nginx-config\") pod \"whisker-5668cff8fc-xc5cn\" (UID: \"845525b3-5736-4423-84eb-ae963b0e3b2b\") " pod="calico-system/whisker-5668cff8fc-xc5cn" Apr 21 10:07:18.077806 kubelet[3162]: I0421 10:07:18.077658 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-backend-key-pair\") pod \"whisker-5668cff8fc-xc5cn\" (UID: \"845525b3-5736-4423-84eb-ae963b0e3b2b\") " 
pod="calico-system/whisker-5668cff8fc-xc5cn" Apr 21 10:07:18.077806 kubelet[3162]: I0421 10:07:18.077752 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ncnr\" (UniqueName: \"kubernetes.io/projected/845525b3-5736-4423-84eb-ae963b0e3b2b-kube-api-access-2ncnr\") pod \"whisker-5668cff8fc-xc5cn\" (UID: \"845525b3-5736-4423-84eb-ae963b0e3b2b\") " pod="calico-system/whisker-5668cff8fc-xc5cn" Apr 21 10:07:18.094843 systemd[1]: Created slice kubepods-besteffort-pod1a31afb8_9d55_404f_8363_717bc3de0918.slice - libcontainer container kubepods-besteffort-pod1a31afb8_9d55_404f_8363_717bc3de0918.slice. Apr 21 10:07:18.115620 systemd[1]: Created slice kubepods-besteffort-pod26d85b28_0419_4bec_8bca_4c6bc1376147.slice - libcontainer container kubepods-besteffort-pod26d85b28_0419_4bec_8bca_4c6bc1376147.slice. Apr 21 10:07:18.137749 systemd[1]: Created slice kubepods-besteffort-poda8a3131a_1ae1_4f2c_85eb_7bb7a52a06c6.slice - libcontainer container kubepods-besteffort-poda8a3131a_1ae1_4f2c_85eb_7bb7a52a06c6.slice. 
Apr 21 10:07:18.181984 kubelet[3162]: I0421 10:07:18.178481 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a31afb8-9d55-404f-8363-717bc3de0918-config\") pod \"goldmane-9f7667bb8-fr8h9\" (UID: \"1a31afb8-9d55-404f-8363-717bc3de0918\") " pod="calico-system/goldmane-9f7667bb8-fr8h9" Apr 21 10:07:18.181984 kubelet[3162]: I0421 10:07:18.178593 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6-calico-apiserver-certs\") pod \"calico-apiserver-756779796c-l52kg\" (UID: \"a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6\") " pod="calico-system/calico-apiserver-756779796c-l52kg" Apr 21 10:07:18.181984 kubelet[3162]: I0421 10:07:18.178632 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-249tm\" (UniqueName: \"kubernetes.io/projected/a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6-kube-api-access-249tm\") pod \"calico-apiserver-756779796c-l52kg\" (UID: \"a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6\") " pod="calico-system/calico-apiserver-756779796c-l52kg" Apr 21 10:07:18.181984 kubelet[3162]: I0421 10:07:18.178680 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1a31afb8-9d55-404f-8363-717bc3de0918-goldmane-key-pair\") pod \"goldmane-9f7667bb8-fr8h9\" (UID: \"1a31afb8-9d55-404f-8363-717bc3de0918\") " pod="calico-system/goldmane-9f7667bb8-fr8h9" Apr 21 10:07:18.181984 kubelet[3162]: I0421 10:07:18.178762 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26d85b28-0419-4bec-8bca-4c6bc1376147-tigera-ca-bundle\") pod \"calico-kube-controllers-74499fb665-57b24\" (UID: 
\"26d85b28-0419-4bec-8bca-4c6bc1376147\") " pod="calico-system/calico-kube-controllers-74499fb665-57b24" Apr 21 10:07:18.182463 kubelet[3162]: I0421 10:07:18.178803 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvzt\" (UniqueName: \"kubernetes.io/projected/26d85b28-0419-4bec-8bca-4c6bc1376147-kube-api-access-vpvzt\") pod \"calico-kube-controllers-74499fb665-57b24\" (UID: \"26d85b28-0419-4bec-8bca-4c6bc1376147\") " pod="calico-system/calico-kube-controllers-74499fb665-57b24" Apr 21 10:07:18.182463 kubelet[3162]: I0421 10:07:18.178842 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dcac6754-a482-47df-a01f-171bd1cc8980-calico-apiserver-certs\") pod \"calico-apiserver-756779796c-mwdfq\" (UID: \"dcac6754-a482-47df-a01f-171bd1cc8980\") " pod="calico-system/calico-apiserver-756779796c-mwdfq" Apr 21 10:07:18.182463 kubelet[3162]: I0421 10:07:18.178882 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjwtb\" (UniqueName: \"kubernetes.io/projected/dcac6754-a482-47df-a01f-171bd1cc8980-kube-api-access-zjwtb\") pod \"calico-apiserver-756779796c-mwdfq\" (UID: \"dcac6754-a482-47df-a01f-171bd1cc8980\") " pod="calico-system/calico-apiserver-756779796c-mwdfq" Apr 21 10:07:18.182463 kubelet[3162]: I0421 10:07:18.178917 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpkh2\" (UniqueName: \"kubernetes.io/projected/7842ac67-69e7-4228-9787-d5e2cbd9c0b8-kube-api-access-xpkh2\") pod \"coredns-7d764666f9-n4jjg\" (UID: \"7842ac67-69e7-4228-9787-d5e2cbd9c0b8\") " pod="kube-system/coredns-7d764666f9-n4jjg" Apr 21 10:07:18.182463 kubelet[3162]: I0421 10:07:18.178959 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4k29g\" (UniqueName: \"kubernetes.io/projected/1a31afb8-9d55-404f-8363-717bc3de0918-kube-api-access-4k29g\") pod \"goldmane-9f7667bb8-fr8h9\" (UID: \"1a31afb8-9d55-404f-8363-717bc3de0918\") " pod="calico-system/goldmane-9f7667bb8-fr8h9" Apr 21 10:07:18.182798 kubelet[3162]: I0421 10:07:18.179061 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7842ac67-69e7-4228-9787-d5e2cbd9c0b8-config-volume\") pod \"coredns-7d764666f9-n4jjg\" (UID: \"7842ac67-69e7-4228-9787-d5e2cbd9c0b8\") " pod="kube-system/coredns-7d764666f9-n4jjg" Apr 21 10:07:18.182798 kubelet[3162]: I0421 10:07:18.179099 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a31afb8-9d55-404f-8363-717bc3de0918-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-fr8h9\" (UID: \"1a31afb8-9d55-404f-8363-717bc3de0918\") " pod="calico-system/goldmane-9f7667bb8-fr8h9" Apr 21 10:07:18.302909 containerd[1955]: time="2026-04-21T10:07:18.302833363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5668cff8fc-xc5cn,Uid:845525b3-5736-4423-84eb-ae963b0e3b2b,Namespace:calico-system,Attempt:0,}" Apr 21 10:07:18.323964 containerd[1955]: time="2026-04-21T10:07:18.322827975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cxnd9,Uid:af9f8050-444d-4935-9809-2c239d4c35de,Namespace:kube-system,Attempt:0,}" Apr 21 10:07:18.384541 containerd[1955]: time="2026-04-21T10:07:18.384457583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756779796c-mwdfq,Uid:dcac6754-a482-47df-a01f-171bd1cc8980,Namespace:calico-system,Attempt:0,}" Apr 21 10:07:18.408008 containerd[1955]: time="2026-04-21T10:07:18.407929949Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-9f7667bb8-fr8h9,Uid:1a31afb8-9d55-404f-8363-717bc3de0918,Namespace:calico-system,Attempt:0,}" Apr 21 10:07:18.431226 containerd[1955]: time="2026-04-21T10:07:18.430749370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74499fb665-57b24,Uid:26d85b28-0419-4bec-8bca-4c6bc1376147,Namespace:calico-system,Attempt:0,}" Apr 21 10:07:18.450934 containerd[1955]: time="2026-04-21T10:07:18.449889011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756779796c-l52kg,Uid:a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6,Namespace:calico-system,Attempt:0,}" Apr 21 10:07:18.660569 containerd[1955]: time="2026-04-21T10:07:18.660377184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-n4jjg,Uid:7842ac67-69e7-4228-9787-d5e2cbd9c0b8,Namespace:kube-system,Attempt:0,}" Apr 21 10:07:18.776016 containerd[1955]: time="2026-04-21T10:07:18.775935787Z" level=info msg="CreateContainer within sandbox \"b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:07:18.851549 containerd[1955]: time="2026-04-21T10:07:18.850895981Z" level=error msg="Failed to destroy network for sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:18.854227 containerd[1955]: time="2026-04-21T10:07:18.854164302Z" level=info msg="CreateContainer within sandbox \"b2a5c8c68e7f4606e858ec2b37fae47354d266869d5befd11e3b895b8fb92fe2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"aa3f799afc4f8dbab9f71652bc82ca185c44f2ef33b5744a98cdb8a47aba84ed\"" Apr 21 10:07:18.856213 containerd[1955]: time="2026-04-21T10:07:18.856146757Z" level=info msg="StartContainer for 
\"aa3f799afc4f8dbab9f71652bc82ca185c44f2ef33b5744a98cdb8a47aba84ed\"" Apr 21 10:07:18.902535 containerd[1955]: time="2026-04-21T10:07:18.895365449Z" level=error msg="encountered an error cleaning up failed sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:18.902535 containerd[1955]: time="2026-04-21T10:07:18.895858199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cxnd9,Uid:af9f8050-444d-4935-9809-2c239d4c35de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:18.904951 kubelet[3162]: E0421 10:07:18.904059 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:18.912651 kubelet[3162]: E0421 10:07:18.911890 3162 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cxnd9" Apr 21 
10:07:18.917716 kubelet[3162]: E0421 10:07:18.914956 3162 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cxnd9" Apr 21 10:07:18.917716 kubelet[3162]: E0421 10:07:18.915078 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-cxnd9_kube-system(af9f8050-444d-4935-9809-2c239d4c35de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-cxnd9_kube-system(af9f8050-444d-4935-9809-2c239d4c35de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-cxnd9" podUID="af9f8050-444d-4935-9809-2c239d4c35de" Apr 21 10:07:18.981786 containerd[1955]: time="2026-04-21T10:07:18.981671217Z" level=error msg="Failed to destroy network for sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:18.984540 containerd[1955]: time="2026-04-21T10:07:18.984416809Z" level=error msg="encountered an error cleaning up failed sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:18.991846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685-shm.mount: Deactivated successfully. Apr 21 10:07:18.996912 containerd[1955]: time="2026-04-21T10:07:18.996262661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756779796c-mwdfq,Uid:dcac6754-a482-47df-a01f-171bd1cc8980,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:18.998495 kubelet[3162]: E0421 10:07:18.997873 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:18.998495 kubelet[3162]: E0421 10:07:18.997967 3162 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-756779796c-mwdfq" Apr 21 10:07:18.998495 kubelet[3162]: E0421 10:07:18.998003 3162 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-756779796c-mwdfq" Apr 21 10:07:18.999000 kubelet[3162]: E0421 10:07:18.998084 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-756779796c-mwdfq_calico-system(dcac6754-a482-47df-a01f-171bd1cc8980)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-756779796c-mwdfq_calico-system(dcac6754-a482-47df-a01f-171bd1cc8980)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-756779796c-mwdfq" podUID="dcac6754-a482-47df-a01f-171bd1cc8980" Apr 21 10:07:19.030091 containerd[1955]: time="2026-04-21T10:07:19.029542261Z" level=error msg="Failed to destroy network for sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.038375 containerd[1955]: time="2026-04-21T10:07:19.037885614Z" level=error msg="Failed to destroy network for sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.039184 containerd[1955]: 
time="2026-04-21T10:07:19.038115493Z" level=error msg="encountered an error cleaning up failed sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.039184 containerd[1955]: time="2026-04-21T10:07:19.039069202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5668cff8fc-xc5cn,Uid:845525b3-5736-4423-84eb-ae963b0e3b2b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.039381 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186-shm.mount: Deactivated successfully. 
Apr 21 10:07:19.048550 containerd[1955]: time="2026-04-21T10:07:19.044992602Z" level=error msg="encountered an error cleaning up failed sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.048704 kubelet[3162]: E0421 10:07:19.045440 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.048704 kubelet[3162]: E0421 10:07:19.045554 3162 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5668cff8fc-xc5cn" Apr 21 10:07:19.052252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e-shm.mount: Deactivated successfully. 
Apr 21 10:07:19.057252 containerd[1955]: time="2026-04-21T10:07:19.051930785Z" level=error msg="Failed to destroy network for sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.057252 containerd[1955]: time="2026-04-21T10:07:19.056965502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74499fb665-57b24,Uid:26d85b28-0419-4bec-8bca-4c6bc1376147,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.057785 kubelet[3162]: E0421 10:07:19.057712 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.060713 kubelet[3162]: E0421 10:07:19.060647 3162 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74499fb665-57b24" Apr 21 10:07:19.065455 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6-shm.mount: Deactivated successfully. Apr 21 10:07:19.066488 containerd[1955]: time="2026-04-21T10:07:19.066011062Z" level=error msg="encountered an error cleaning up failed sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.066488 containerd[1955]: time="2026-04-21T10:07:19.066109703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-fr8h9,Uid:1a31afb8-9d55-404f-8363-717bc3de0918,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.069114 kubelet[3162]: E0421 10:07:19.069038 3162 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74499fb665-57b24" Apr 21 10:07:19.070467 kubelet[3162]: E0421 10:07:19.069577 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74499fb665-57b24_calico-system(26d85b28-0419-4bec-8bca-4c6bc1376147)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-74499fb665-57b24_calico-system(26d85b28-0419-4bec-8bca-4c6bc1376147)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74499fb665-57b24" podUID="26d85b28-0419-4bec-8bca-4c6bc1376147" Apr 21 10:07:19.071088 kubelet[3162]: E0421 10:07:19.070998 3162 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5668cff8fc-xc5cn" Apr 21 10:07:19.071420 kubelet[3162]: E0421 10:07:19.071359 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5668cff8fc-xc5cn_calico-system(845525b3-5736-4423-84eb-ae963b0e3b2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5668cff8fc-xc5cn_calico-system(845525b3-5736-4423-84eb-ae963b0e3b2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5668cff8fc-xc5cn" podUID="845525b3-5736-4423-84eb-ae963b0e3b2b" Apr 21 10:07:19.072200 kubelet[3162]: E0421 10:07:19.071890 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.072200 kubelet[3162]: E0421 10:07:19.071971 3162 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-fr8h9" Apr 21 10:07:19.072200 kubelet[3162]: E0421 10:07:19.072005 3162 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-fr8h9" Apr 21 10:07:19.072548 kubelet[3162]: E0421 10:07:19.072090 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-fr8h9_calico-system(1a31afb8-9d55-404f-8363-717bc3de0918)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-fr8h9_calico-system(1a31afb8-9d55-404f-8363-717bc3de0918)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-9f7667bb8-fr8h9" podUID="1a31afb8-9d55-404f-8363-717bc3de0918" Apr 21 10:07:19.123027 systemd[1]: Started cri-containerd-aa3f799afc4f8dbab9f71652bc82ca185c44f2ef33b5744a98cdb8a47aba84ed.scope - libcontainer container aa3f799afc4f8dbab9f71652bc82ca185c44f2ef33b5744a98cdb8a47aba84ed. Apr 21 10:07:19.143718 containerd[1955]: time="2026-04-21T10:07:19.142920850Z" level=error msg="Failed to destroy network for sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.143718 containerd[1955]: time="2026-04-21T10:07:19.143465154Z" level=error msg="encountered an error cleaning up failed sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.143718 containerd[1955]: time="2026-04-21T10:07:19.143599081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756779796c-l52kg,Uid:a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.144354 kubelet[3162]: E0421 10:07:19.144306 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.144660 kubelet[3162]: E0421 10:07:19.144625 3162 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-756779796c-l52kg" Apr 21 10:07:19.144819 kubelet[3162]: E0421 10:07:19.144786 3162 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-756779796c-l52kg" Apr 21 10:07:19.145445 kubelet[3162]: E0421 10:07:19.144985 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-756779796c-l52kg_calico-system(a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-756779796c-l52kg_calico-system(a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-756779796c-l52kg" podUID="a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6" Apr 21 
10:07:19.192278 containerd[1955]: time="2026-04-21T10:07:19.192125977Z" level=error msg="Failed to destroy network for sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.195628 containerd[1955]: time="2026-04-21T10:07:19.195426666Z" level=error msg="encountered an error cleaning up failed sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.198604 containerd[1955]: time="2026-04-21T10:07:19.198335768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-n4jjg,Uid:7842ac67-69e7-4228-9787-d5e2cbd9c0b8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.200559 kubelet[3162]: E0421 10:07:19.198889 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.200559 kubelet[3162]: E0421 10:07:19.198963 3162 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-n4jjg" Apr 21 10:07:19.200559 kubelet[3162]: E0421 10:07:19.199000 3162 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-n4jjg" Apr 21 10:07:19.200824 kubelet[3162]: E0421 10:07:19.199076 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-n4jjg_kube-system(7842ac67-69e7-4228-9787-d5e2cbd9c0b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-n4jjg_kube-system(7842ac67-69e7-4228-9787-d5e2cbd9c0b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-n4jjg" podUID="7842ac67-69e7-4228-9787-d5e2cbd9c0b8" Apr 21 10:07:19.207356 containerd[1955]: time="2026-04-21T10:07:19.206163224Z" level=info msg="StartContainer for \"aa3f799afc4f8dbab9f71652bc82ca185c44f2ef33b5744a98cdb8a47aba84ed\" returns successfully" Apr 21 10:07:19.343102 systemd[1]: Created slice kubepods-besteffort-pod678541e7_a702_489b_a934_cdd3a561ab11.slice - libcontainer container 
kubepods-besteffort-pod678541e7_a702_489b_a934_cdd3a561ab11.slice. Apr 21 10:07:19.355632 containerd[1955]: time="2026-04-21T10:07:19.355040459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sx57,Uid:678541e7-a702-489b-a934-cdd3a561ab11,Namespace:calico-system,Attempt:0,}" Apr 21 10:07:19.517796 containerd[1955]: time="2026-04-21T10:07:19.517731124Z" level=error msg="Failed to destroy network for sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.520154 containerd[1955]: time="2026-04-21T10:07:19.519884821Z" level=error msg="encountered an error cleaning up failed sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.520154 containerd[1955]: time="2026-04-21T10:07:19.520004629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sx57,Uid:678541e7-a702-489b-a934-cdd3a561ab11,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.521163 kubelet[3162]: E0421 10:07:19.520501 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:07:19.521419 kubelet[3162]: E0421 10:07:19.521364 3162 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sx57" Apr 21 10:07:19.522213 kubelet[3162]: E0421 10:07:19.521677 3162 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sx57" Apr 21 10:07:19.522213 kubelet[3162]: E0421 10:07:19.521832 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8sx57_calico-system(678541e7-a702-489b-a934-cdd3a561ab11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8sx57_calico-system(678541e7-a702-489b-a934-cdd3a561ab11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8sx57" podUID="678541e7-a702-489b-a934-cdd3a561ab11" Apr 21 10:07:19.749238 kubelet[3162]: I0421 10:07:19.748226 3162 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Apr 21 10:07:19.749372 containerd[1955]: time="2026-04-21T10:07:19.749154772Z" level=info msg="StopPodSandbox for \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\"" Apr 21 10:07:19.749560 containerd[1955]: time="2026-04-21T10:07:19.749436097Z" level=info msg="Ensure that sandbox 5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d in task-service has been cleanup successfully" Apr 21 10:07:19.754437 kubelet[3162]: I0421 10:07:19.754373 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:19.757609 containerd[1955]: time="2026-04-21T10:07:19.756956043Z" level=info msg="StopPodSandbox for \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\"" Apr 21 10:07:19.758532 containerd[1955]: time="2026-04-21T10:07:19.758393449Z" level=info msg="Ensure that sandbox 3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b in task-service has been cleanup successfully" Apr 21 10:07:19.777597 kubelet[3162]: I0421 10:07:19.774065 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:19.786427 containerd[1955]: time="2026-04-21T10:07:19.785729382Z" level=info msg="StopPodSandbox for \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\"" Apr 21 10:07:19.786427 containerd[1955]: time="2026-04-21T10:07:19.786022533Z" level=info msg="Ensure that sandbox b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950 in task-service has been cleanup successfully" Apr 21 10:07:19.805594 kubelet[3162]: I0421 10:07:19.805522 3162 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:19.811703 containerd[1955]: time="2026-04-21T10:07:19.811046495Z" level=info msg="StopPodSandbox for \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\"" Apr 21 10:07:19.811703 containerd[1955]: time="2026-04-21T10:07:19.811345528Z" level=info msg="Ensure that sandbox 107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373 in task-service has been cleanup successfully" Apr 21 10:07:19.820301 kubelet[3162]: I0421 10:07:19.820185 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Apr 21 10:07:19.825419 containerd[1955]: time="2026-04-21T10:07:19.824668334Z" level=info msg="StopPodSandbox for \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\"" Apr 21 10:07:19.825419 containerd[1955]: time="2026-04-21T10:07:19.824981643Z" level=info msg="Ensure that sandbox fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6 in task-service has been cleanup successfully" Apr 21 10:07:19.832438 kubelet[3162]: I0421 10:07:19.832377 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:19.835309 containerd[1955]: time="2026-04-21T10:07:19.835047978Z" level=info msg="StopPodSandbox for \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\"" Apr 21 10:07:19.835478 containerd[1955]: time="2026-04-21T10:07:19.835343638Z" level=info msg="Ensure that sandbox 21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186 in task-service has been cleanup successfully" Apr 21 10:07:19.854548 kubelet[3162]: I0421 10:07:19.852463 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Apr 21 10:07:19.856611 containerd[1955]: 
time="2026-04-21T10:07:19.856351736Z" level=info msg="StopPodSandbox for \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\"" Apr 21 10:07:19.864536 containerd[1955]: time="2026-04-21T10:07:19.863146736Z" level=info msg="Ensure that sandbox c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e in task-service has been cleanup successfully" Apr 21 10:07:19.883925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373-shm.mount: Deactivated successfully. Apr 21 10:07:19.884486 kubelet[3162]: I0421 10:07:19.884364 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Apr 21 10:07:19.884851 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b-shm.mount: Deactivated successfully. Apr 21 10:07:19.899568 kubelet[3162]: I0421 10:07:19.899428 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-fg7rc" podStartSLOduration=2.311310997 podStartE2EDuration="20.899404857s" podCreationTimestamp="2026-04-21 10:06:59 +0000 UTC" firstStartedPulling="2026-04-21 10:07:00.148318571 +0000 UTC m=+36.065680676" lastFinishedPulling="2026-04-21 10:07:18.736412419 +0000 UTC m=+54.653774536" observedRunningTime="2026-04-21 10:07:19.816883344 +0000 UTC m=+55.734245461" watchObservedRunningTime="2026-04-21 10:07:19.899404857 +0000 UTC m=+55.816766962" Apr 21 10:07:19.902103 containerd[1955]: time="2026-04-21T10:07:19.902047641Z" level=info msg="StopPodSandbox for \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\"" Apr 21 10:07:19.903748 containerd[1955]: time="2026-04-21T10:07:19.903678837Z" level=info msg="Ensure that sandbox 734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685 in task-service has been cleanup successfully" Apr 21 
10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.497 [INFO][4545] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.498 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" iface="eth0" netns="/var/run/netns/cni-f4580026-f7c2-5ad1-4bd3-8ded81cbe7e3" Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.498 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" iface="eth0" netns="/var/run/netns/cni-f4580026-f7c2-5ad1-4bd3-8ded81cbe7e3" Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.501 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" iface="eth0" netns="/var/run/netns/cni-f4580026-f7c2-5ad1-4bd3-8ded81cbe7e3" Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.501 [INFO][4545] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.501 [INFO][4545] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.732 [INFO][4633] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" HandleID="k8s-pod-network.107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.732 [INFO][4633] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.733 [INFO][4633] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.768 [WARNING][4633] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" HandleID="k8s-pod-network.107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.768 [INFO][4633] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" HandleID="k8s-pod-network.107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.784 [INFO][4633] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:20.833583 containerd[1955]: 2026-04-21 10:07:20.811 [INFO][4545] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:20.837342 containerd[1955]: time="2026-04-21T10:07:20.834677726Z" level=info msg="TearDown network for sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\" successfully" Apr 21 10:07:20.837342 containerd[1955]: time="2026-04-21T10:07:20.834726494Z" level=info msg="StopPodSandbox for \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\" returns successfully" Apr 21 10:07:20.843060 systemd[1]: run-netns-cni\x2df4580026\x2df7c2\x2d5ad1\x2d4bd3\x2d8ded81cbe7e3.mount: Deactivated successfully. 
Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.463 [INFO][4544] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.464 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" iface="eth0" netns="/var/run/netns/cni-a5ba33ae-671c-6626-472e-f2e88f9c0984" Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.464 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" iface="eth0" netns="/var/run/netns/cni-a5ba33ae-671c-6626-472e-f2e88f9c0984" Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.466 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" iface="eth0" netns="/var/run/netns/cni-a5ba33ae-671c-6626-472e-f2e88f9c0984" Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.466 [INFO][4544] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.467 [INFO][4544] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.729 [INFO][4628] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" HandleID="k8s-pod-network.b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.734 [INFO][4628] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.788 [INFO][4628] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.816 [WARNING][4628] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" HandleID="k8s-pod-network.b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.816 [INFO][4628] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" HandleID="k8s-pod-network.b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.820 [INFO][4628] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:20.844044 containerd[1955]: 2026-04-21 10:07:20.831 [INFO][4544] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:20.849170 containerd[1955]: time="2026-04-21T10:07:20.846592612Z" level=info msg="TearDown network for sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\" successfully" Apr 21 10:07:20.849170 containerd[1955]: time="2026-04-21T10:07:20.846657013Z" level=info msg="StopPodSandbox for \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\" returns successfully" Apr 21 10:07:20.858348 systemd[1]: run-netns-cni\x2da5ba33ae\x2d671c\x2d6626\x2d472e\x2df2e88f9c0984.mount: Deactivated successfully. 
Apr 21 10:07:20.864573 containerd[1955]: time="2026-04-21T10:07:20.862184024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-n4jjg,Uid:7842ac67-69e7-4228-9787-d5e2cbd9c0b8,Namespace:kube-system,Attempt:1,}" Apr 21 10:07:20.868203 containerd[1955]: time="2026-04-21T10:07:20.868134618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cxnd9,Uid:af9f8050-444d-4935-9809-2c239d4c35de,Namespace:kube-system,Attempt:1,}" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.513 [INFO][4498] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.513 [INFO][4498] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" iface="eth0" netns="/var/run/netns/cni-47966eee-8c35-0654-6f3e-c5738969a0d5" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.514 [INFO][4498] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" iface="eth0" netns="/var/run/netns/cni-47966eee-8c35-0654-6f3e-c5738969a0d5" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.531 [INFO][4498] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" iface="eth0" netns="/var/run/netns/cni-47966eee-8c35-0654-6f3e-c5738969a0d5" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.531 [INFO][4498] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.531 [INFO][4498] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.781 [INFO][4640] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" HandleID="k8s-pod-network.5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.781 [INFO][4640] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.824 [INFO][4640] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.876 [WARNING][4640] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" HandleID="k8s-pod-network.5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.876 [INFO][4640] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" HandleID="k8s-pod-network.5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.882 [INFO][4640] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:20.905466 containerd[1955]: 2026-04-21 10:07:20.888 [INFO][4498] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Apr 21 10:07:20.909814 containerd[1955]: time="2026-04-21T10:07:20.909744269Z" level=info msg="TearDown network for sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\" successfully" Apr 21 10:07:20.909814 containerd[1955]: time="2026-04-21T10:07:20.909798044Z" level=info msg="StopPodSandbox for \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\" returns successfully" Apr 21 10:07:20.920135 containerd[1955]: time="2026-04-21T10:07:20.919501281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sx57,Uid:678541e7-a702-489b-a934-cdd3a561ab11,Namespace:calico-system,Attempt:1,}" Apr 21 10:07:20.930622 systemd[1]: run-netns-cni\x2d47966eee\x2d8c35\x2d0654\x2d6f3e\x2dc5738969a0d5.mount: Deactivated successfully. 
Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.511 [INFO][4577] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.511 [INFO][4577] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" iface="eth0" netns="/var/run/netns/cni-ce4ed587-ef38-69f8-d978-b41458586b32" Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.512 [INFO][4577] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" iface="eth0" netns="/var/run/netns/cni-ce4ed587-ef38-69f8-d978-b41458586b32" Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.515 [INFO][4577] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" iface="eth0" netns="/var/run/netns/cni-ce4ed587-ef38-69f8-d978-b41458586b32" Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.515 [INFO][4577] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.516 [INFO][4577] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.810 [INFO][4637] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" HandleID="k8s-pod-network.c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.811 
[INFO][4637] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.882 [INFO][4637] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.953 [WARNING][4637] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" HandleID="k8s-pod-network.c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.953 [INFO][4637] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" HandleID="k8s-pod-network.c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.963 [INFO][4637] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:21.038408 containerd[1955]: 2026-04-21 10:07:20.990 [INFO][4577] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Apr 21 10:07:21.040874 containerd[1955]: time="2026-04-21T10:07:21.040794346Z" level=info msg="TearDown network for sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\" successfully" Apr 21 10:07:21.040874 containerd[1955]: time="2026-04-21T10:07:21.040855120Z" level=info msg="StopPodSandbox for \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\" returns successfully" Apr 21 10:07:21.047179 containerd[1955]: time="2026-04-21T10:07:21.047021114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74499fb665-57b24,Uid:26d85b28-0419-4bec-8bca-4c6bc1376147,Namespace:calico-system,Attempt:1,}" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:20.510 [INFO][4494] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:20.510 [INFO][4494] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" iface="eth0" netns="/var/run/netns/cni-1c55a3d3-0da0-cf73-e31d-8438b4cf19fe" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:20.515 [INFO][4494] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" iface="eth0" netns="/var/run/netns/cni-1c55a3d3-0da0-cf73-e31d-8438b4cf19fe" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:20.529 [INFO][4494] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" iface="eth0" netns="/var/run/netns/cni-1c55a3d3-0da0-cf73-e31d-8438b4cf19fe" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:20.529 [INFO][4494] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:20.529 [INFO][4494] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:20.827 [INFO][4645] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" HandleID="k8s-pod-network.3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:20.828 [INFO][4645] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:20.965 [INFO][4645] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:21.029 [WARNING][4645] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" HandleID="k8s-pod-network.3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:21.039 [INFO][4645] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" HandleID="k8s-pod-network.3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:21.054 [INFO][4645] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:21.105387 containerd[1955]: 2026-04-21 10:07:21.078 [INFO][4494] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:21.115466 containerd[1955]: time="2026-04-21T10:07:21.114682284Z" level=info msg="TearDown network for sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\" successfully" Apr 21 10:07:21.115466 containerd[1955]: time="2026-04-21T10:07:21.114745868Z" level=info msg="StopPodSandbox for \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\" returns successfully" Apr 21 10:07:21.120910 containerd[1955]: time="2026-04-21T10:07:21.120857150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756779796c-l52kg,Uid:a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6,Namespace:calico-system,Attempt:1,}" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:20.504 [INFO][4573] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:20.504 [INFO][4573] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" iface="eth0" netns="/var/run/netns/cni-7ef3ff5c-995a-b4ee-a01b-b76c089845fa" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:20.506 [INFO][4573] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" iface="eth0" netns="/var/run/netns/cni-7ef3ff5c-995a-b4ee-a01b-b76c089845fa" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:20.508 [INFO][4573] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" iface="eth0" netns="/var/run/netns/cni-7ef3ff5c-995a-b4ee-a01b-b76c089845fa" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:20.509 [INFO][4573] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:20.509 [INFO][4573] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:20.822 [INFO][4635] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" HandleID="k8s-pod-network.fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:20.827 [INFO][4635] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:21.056 [INFO][4635] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:21.094 [WARNING][4635] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" HandleID="k8s-pod-network.fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:21.094 [INFO][4635] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" HandleID="k8s-pod-network.fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:21.103 [INFO][4635] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:21.141528 containerd[1955]: 2026-04-21 10:07:21.132 [INFO][4573] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Apr 21 10:07:21.143635 containerd[1955]: time="2026-04-21T10:07:21.142602994Z" level=info msg="TearDown network for sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\" successfully" Apr 21 10:07:21.143635 containerd[1955]: time="2026-04-21T10:07:21.142697013Z" level=info msg="StopPodSandbox for \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\" returns successfully" Apr 21 10:07:21.150828 containerd[1955]: time="2026-04-21T10:07:21.150769331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-fr8h9,Uid:1a31afb8-9d55-404f-8363-717bc3de0918,Namespace:calico-system,Attempt:1,}" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:20.524 [INFO][4578] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:20.527 [INFO][4578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" iface="eth0" netns="/var/run/netns/cni-006bf8f1-26b2-dc70-7059-6f659dea0d6c" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:20.528 [INFO][4578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" iface="eth0" netns="/var/run/netns/cni-006bf8f1-26b2-dc70-7059-6f659dea0d6c" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:20.532 [INFO][4578] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" iface="eth0" netns="/var/run/netns/cni-006bf8f1-26b2-dc70-7059-6f659dea0d6c" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:20.532 [INFO][4578] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:20.532 [INFO][4578] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:20.829 [INFO][4646] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" HandleID="k8s-pod-network.734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:20.829 [INFO][4646] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:21.110 [INFO][4646] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:21.159 [WARNING][4646] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" HandleID="k8s-pod-network.734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:21.159 [INFO][4646] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" HandleID="k8s-pod-network.734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:21.168 [INFO][4646] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:21.214066 containerd[1955]: 2026-04-21 10:07:21.186 [INFO][4578] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Apr 21 10:07:21.216019 containerd[1955]: time="2026-04-21T10:07:21.215024003Z" level=info msg="TearDown network for sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\" successfully" Apr 21 10:07:21.216019 containerd[1955]: time="2026-04-21T10:07:21.215075208Z" level=info msg="StopPodSandbox for \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\" returns successfully" Apr 21 10:07:21.219825 containerd[1955]: time="2026-04-21T10:07:21.218816554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756779796c-mwdfq,Uid:dcac6754-a482-47df-a01f-171bd1cc8980,Namespace:calico-system,Attempt:1,}" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:20.540 [INFO][4582] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:20.541 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" iface="eth0" netns="/var/run/netns/cni-9d761d68-5b08-07b1-55cb-1de8abe67b38" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:20.541 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" iface="eth0" netns="/var/run/netns/cni-9d761d68-5b08-07b1-55cb-1de8abe67b38" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:20.545 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" iface="eth0" netns="/var/run/netns/cni-9d761d68-5b08-07b1-55cb-1de8abe67b38" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:20.545 [INFO][4582] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:20.545 [INFO][4582] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:20.826 [INFO][4652] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" HandleID="k8s-pod-network.21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Workload="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:20.829 [INFO][4652] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:21.168 [INFO][4652] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:21.208 [WARNING][4652] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" HandleID="k8s-pod-network.21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Workload="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:21.208 [INFO][4652] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" HandleID="k8s-pod-network.21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Workload="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:21.214 [INFO][4652] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:21.255372 containerd[1955]: 2026-04-21 10:07:21.239 [INFO][4582] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:21.259919 containerd[1955]: time="2026-04-21T10:07:21.259845727Z" level=info msg="TearDown network for sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\" successfully" Apr 21 10:07:21.259919 containerd[1955]: time="2026-04-21T10:07:21.259911231Z" level=info msg="StopPodSandbox for \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\" returns successfully" Apr 21 10:07:21.434335 kubelet[3162]: I0421 10:07:21.432387 3162 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-ca-bundle\") pod \"845525b3-5736-4423-84eb-ae963b0e3b2b\" (UID: \"845525b3-5736-4423-84eb-ae963b0e3b2b\") " Apr 21 10:07:21.434335 kubelet[3162]: I0421 10:07:21.432480 3162 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/secret/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-backend-key-pair\") pod \"845525b3-5736-4423-84eb-ae963b0e3b2b\" (UID: \"845525b3-5736-4423-84eb-ae963b0e3b2b\") " Apr 21 10:07:21.434335 kubelet[3162]: I0421 10:07:21.432576 3162 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/845525b3-5736-4423-84eb-ae963b0e3b2b-kube-api-access-2ncnr\" (UniqueName: \"kubernetes.io/projected/845525b3-5736-4423-84eb-ae963b0e3b2b-kube-api-access-2ncnr\") pod \"845525b3-5736-4423-84eb-ae963b0e3b2b\" (UID: \"845525b3-5736-4423-84eb-ae963b0e3b2b\") " Apr 21 10:07:21.434335 kubelet[3162]: I0421 10:07:21.432637 3162 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-nginx-config\" (UniqueName: \"kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-nginx-config\") pod \"845525b3-5736-4423-84eb-ae963b0e3b2b\" (UID: \"845525b3-5736-4423-84eb-ae963b0e3b2b\") " Apr 21 10:07:21.440673 kubelet[3162]: I0421 10:07:21.440601 3162 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-ca-bundle" pod "845525b3-5736-4423-84eb-ae963b0e3b2b" (UID: "845525b3-5736-4423-84eb-ae963b0e3b2b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:07:21.447044 kubelet[3162]: I0421 10:07:21.446951 3162 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-nginx-config" pod "845525b3-5736-4423-84eb-ae963b0e3b2b" (UID: "845525b3-5736-4423-84eb-ae963b0e3b2b"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:07:21.481237 kubelet[3162]: I0421 10:07:21.481144 3162 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845525b3-5736-4423-84eb-ae963b0e3b2b-kube-api-access-2ncnr" pod "845525b3-5736-4423-84eb-ae963b0e3b2b" (UID: "845525b3-5736-4423-84eb-ae963b0e3b2b"). InnerVolumeSpecName "kube-api-access-2ncnr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:07:21.495708 kubelet[3162]: I0421 10:07:21.493716 3162 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-backend-key-pair" pod "845525b3-5736-4423-84eb-ae963b0e3b2b" (UID: "845525b3-5736-4423-84eb-ae963b0e3b2b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:07:21.534774 kubelet[3162]: I0421 10:07:21.534609 3162 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-backend-key-pair\") on node \"ip-172-31-20-11\" DevicePath \"\"" Apr 21 10:07:21.534774 kubelet[3162]: I0421 10:07:21.534659 3162 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2ncnr\" (UniqueName: \"kubernetes.io/projected/845525b3-5736-4423-84eb-ae963b0e3b2b-kube-api-access-2ncnr\") on node \"ip-172-31-20-11\" DevicePath \"\"" Apr 21 10:07:21.534774 kubelet[3162]: I0421 10:07:21.534683 3162 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-nginx-config\") on node \"ip-172-31-20-11\" DevicePath \"\"" Apr 21 10:07:21.534774 kubelet[3162]: I0421 10:07:21.534708 3162 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/845525b3-5736-4423-84eb-ae963b0e3b2b-whisker-ca-bundle\") on node 
\"ip-172-31-20-11\" DevicePath \"\"" Apr 21 10:07:21.943860 systemd[1]: run-netns-cni\x2d1c55a3d3\x2d0da0\x2dcf73\x2de31d\x2d8438b4cf19fe.mount: Deactivated successfully. Apr 21 10:07:21.944457 systemd[1]: run-netns-cni\x2dce4ed587\x2def38\x2d69f8\x2dd978\x2db41458586b32.mount: Deactivated successfully. Apr 21 10:07:21.945125 systemd[1]: run-netns-cni\x2d7ef3ff5c\x2d995a\x2db4ee\x2da01b\x2db76c089845fa.mount: Deactivated successfully. Apr 21 10:07:21.945809 systemd[1]: run-netns-cni\x2d006bf8f1\x2d26b2\x2ddc70\x2d7059\x2d6f659dea0d6c.mount: Deactivated successfully. Apr 21 10:07:21.946357 systemd[1]: run-netns-cni\x2d9d761d68\x2d5b08\x2d07b1\x2d55cb\x2d1de8abe67b38.mount: Deactivated successfully. Apr 21 10:07:21.946941 systemd[1]: var-lib-kubelet-pods-845525b3\x2d5736\x2d4423\x2d84eb\x2dae963b0e3b2b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2ncnr.mount: Deactivated successfully. Apr 21 10:07:21.947588 systemd[1]: var-lib-kubelet-pods-845525b3\x2d5736\x2d4423\x2d84eb\x2dae963b0e3b2b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:07:21.971168 systemd[1]: Removed slice kubepods-besteffort-pod845525b3_5736_4423_84eb_ae963b0e3b2b.slice - libcontainer container kubepods-besteffort-pod845525b3_5736_4423_84eb_ae963b0e3b2b.slice. Apr 21 10:07:22.238374 systemd-networkd[1831]: cali76289bd06d2: Link UP Apr 21 10:07:22.242198 (udev-worker)[4922]: Network interface NamePolicy= disabled on kernel command line. 
Apr 21 10:07:22.244177 systemd-networkd[1831]: cali76289bd06d2: Gained carrier Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.349 [ERROR][4746] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.388 [INFO][4746] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0 calico-apiserver-756779796c- calico-system a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6 973 0 2026-04-21 10:06:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:756779796c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-11 calico-apiserver-756779796c-l52kg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali76289bd06d2 [] [] }} ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Namespace="calico-system" Pod="calico-apiserver-756779796c-l52kg" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.388 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Namespace="calico-system" Pod="calico-apiserver-756779796c-l52kg" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.728 [INFO][4814] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" 
HandleID="k8s-pod-network.fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.767 [INFO][4814] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" HandleID="k8s-pod-network.fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027c120), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-11", "pod":"calico-apiserver-756779796c-l52kg", "timestamp":"2026-04-21 10:07:21.728720802 +0000 UTC"}, Hostname:"ip-172-31-20-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000328000)} Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.768 [INFO][4814] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.769 [INFO][4814] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.769 [INFO][4814] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-11' Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.790 [INFO][4814] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" host="ip-172-31-20-11" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.810 [INFO][4814] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-11" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.884 [INFO][4814] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:21.970 [INFO][4814] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:22.034 [INFO][4814] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:22.034 [INFO][4814] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" host="ip-172-31-20-11" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:22.118 [INFO][4814] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254 Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:22.168 [INFO][4814] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" host="ip-172-31-20-11" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:22.189 [INFO][4814] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.129/26] block=192.168.71.128/26 
handle="k8s-pod-network.fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" host="ip-172-31-20-11" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:22.191 [INFO][4814] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.129/26] handle="k8s-pod-network.fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" host="ip-172-31-20-11" Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:22.191 [INFO][4814] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:22.322547 containerd[1955]: 2026-04-21 10:07:22.191 [INFO][4814] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.129/26] IPv6=[] ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" HandleID="k8s-pod-network.fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:22.324285 containerd[1955]: 2026-04-21 10:07:22.198 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Namespace="calico-system" Pod="calico-apiserver-756779796c-l52kg" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0", GenerateName:"calico-apiserver-756779796c-", Namespace:"calico-system", SelfLink:"", UID:"a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756779796c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"", Pod:"calico-apiserver-756779796c-l52kg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali76289bd06d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:22.324285 containerd[1955]: 2026-04-21 10:07:22.198 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.129/32] ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Namespace="calico-system" Pod="calico-apiserver-756779796c-l52kg" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:22.324285 containerd[1955]: 2026-04-21 10:07:22.198 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76289bd06d2 ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Namespace="calico-system" Pod="calico-apiserver-756779796c-l52kg" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:22.324285 containerd[1955]: 2026-04-21 10:07:22.246 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Namespace="calico-system" Pod="calico-apiserver-756779796c-l52kg" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:22.324285 containerd[1955]: 2026-04-21 10:07:22.247 [INFO][4746] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Namespace="calico-system" Pod="calico-apiserver-756779796c-l52kg" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0", GenerateName:"calico-apiserver-756779796c-", Namespace:"calico-system", SelfLink:"", UID:"a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756779796c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254", Pod:"calico-apiserver-756779796c-l52kg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali76289bd06d2", MAC:"fe:95:cc:9c:09:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:22.324285 containerd[1955]: 2026-04-21 10:07:22.315 [INFO][4746] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254" Namespace="calico-system" Pod="calico-apiserver-756779796c-l52kg" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:22.344161 kubelet[3162]: I0421 10:07:22.344113 3162 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="845525b3-5736-4423-84eb-ae963b0e3b2b" path="/var/lib/kubelet/pods/845525b3-5736-4423-84eb-ae963b0e3b2b/volumes" Apr 21 10:07:22.381947 containerd[1955]: time="2026-04-21T10:07:22.381392299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:07:22.381947 containerd[1955]: time="2026-04-21T10:07:22.381530464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:07:22.381947 containerd[1955]: time="2026-04-21T10:07:22.381588910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:22.381947 containerd[1955]: time="2026-04-21T10:07:22.381778664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:22.429061 systemd[1]: Created slice kubepods-besteffort-pod009c7e1e_6291_4854_a472_a094b3df1eb4.slice - libcontainer container kubepods-besteffort-pod009c7e1e_6291_4854_a472_a094b3df1eb4.slice. 
Apr 21 10:07:22.458121 kubelet[3162]: I0421 10:07:22.458014 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/009c7e1e-6291-4854-a472-a094b3df1eb4-whisker-ca-bundle\") pod \"whisker-669f97dc6b-fnwbg\" (UID: \"009c7e1e-6291-4854-a472-a094b3df1eb4\") " pod="calico-system/whisker-669f97dc6b-fnwbg" Apr 21 10:07:22.458121 kubelet[3162]: I0421 10:07:22.458106 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfhg\" (UniqueName: \"kubernetes.io/projected/009c7e1e-6291-4854-a472-a094b3df1eb4-kube-api-access-scfhg\") pod \"whisker-669f97dc6b-fnwbg\" (UID: \"009c7e1e-6291-4854-a472-a094b3df1eb4\") " pod="calico-system/whisker-669f97dc6b-fnwbg" Apr 21 10:07:22.458806 kubelet[3162]: I0421 10:07:22.458158 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/009c7e1e-6291-4854-a472-a094b3df1eb4-whisker-backend-key-pair\") pod \"whisker-669f97dc6b-fnwbg\" (UID: \"009c7e1e-6291-4854-a472-a094b3df1eb4\") " pod="calico-system/whisker-669f97dc6b-fnwbg" Apr 21 10:07:22.458806 kubelet[3162]: I0421 10:07:22.458232 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/009c7e1e-6291-4854-a472-a094b3df1eb4-nginx-config\") pod \"whisker-669f97dc6b-fnwbg\" (UID: \"009c7e1e-6291-4854-a472-a094b3df1eb4\") " pod="calico-system/whisker-669f97dc6b-fnwbg" Apr 21 10:07:22.538937 systemd[1]: Started cri-containerd-fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254.scope - libcontainer container fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254. Apr 21 10:07:22.677544 (udev-worker)[4920]: Network interface NamePolicy= disabled on kernel command line. 
Apr 21 10:07:22.680195 systemd-networkd[1831]: calif0e38021757: Link UP Apr 21 10:07:22.685461 systemd-networkd[1831]: calif0e38021757: Gained carrier Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:21.234 [ERROR][4685] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:21.364 [INFO][4685] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0 coredns-7d764666f9- kube-system 7842ac67-69e7-4228-9787-d5e2cbd9c0b8 969 0 2026-04-21 10:06:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-11 coredns-7d764666f9-n4jjg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0e38021757 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Namespace="kube-system" Pod="coredns-7d764666f9-n4jjg" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:21.364 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Namespace="kube-system" Pod="coredns-7d764666f9-n4jjg" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:21.799 [INFO][4823] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" 
HandleID="k8s-pod-network.70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:21.866 [INFO][4823] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" HandleID="k8s-pod-network.70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c4e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-11", "pod":"coredns-7d764666f9-n4jjg", "timestamp":"2026-04-21 10:07:21.79923796 +0000 UTC"}, Hostname:"ip-172-31-20-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000392000)} Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:21.866 [INFO][4823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.191 [INFO][4823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.192 [INFO][4823] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-11' Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.229 [INFO][4823] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" host="ip-172-31-20-11" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.306 [INFO][4823] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-11" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.545 [INFO][4823] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.559 [INFO][4823] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.591 [INFO][4823] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.592 [INFO][4823] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" host="ip-172-31-20-11" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.605 [INFO][4823] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1 Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.625 [INFO][4823] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" host="ip-172-31-20-11" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.668 [INFO][4823] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.130/26] block=192.168.71.128/26 
handle="k8s-pod-network.70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" host="ip-172-31-20-11" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.669 [INFO][4823] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.130/26] handle="k8s-pod-network.70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" host="ip-172-31-20-11" Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.669 [INFO][4823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:22.778455 containerd[1955]: 2026-04-21 10:07:22.669 [INFO][4823] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.130/26] IPv6=[] ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" HandleID="k8s-pod-network.70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:22.781552 containerd[1955]: 2026-04-21 10:07:22.673 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Namespace="kube-system" Pod="coredns-7d764666f9-n4jjg" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7842ac67-69e7-4228-9787-d5e2cbd9c0b8", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"", Pod:"coredns-7d764666f9-n4jjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0e38021757", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:22.781552 containerd[1955]: 2026-04-21 10:07:22.673 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.130/32] ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Namespace="kube-system" Pod="coredns-7d764666f9-n4jjg" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:22.781552 containerd[1955]: 2026-04-21 10:07:22.674 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0e38021757 ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Namespace="kube-system" Pod="coredns-7d764666f9-n4jjg" 
WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:22.781552 containerd[1955]: 2026-04-21 10:07:22.685 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Namespace="kube-system" Pod="coredns-7d764666f9-n4jjg" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:22.781552 containerd[1955]: 2026-04-21 10:07:22.689 [INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Namespace="kube-system" Pod="coredns-7d764666f9-n4jjg" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7842ac67-69e7-4228-9787-d5e2cbd9c0b8", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1", Pod:"coredns-7d764666f9-n4jjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0e38021757", MAC:"1e:51:e2:4d:a8:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:22.781552 containerd[1955]: 2026-04-21 10:07:22.772 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1" Namespace="kube-system" Pod="coredns-7d764666f9-n4jjg" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:22.792809 containerd[1955]: time="2026-04-21T10:07:22.792640604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669f97dc6b-fnwbg,Uid:009c7e1e-6291-4854-a472-a094b3df1eb4,Namespace:calico-system,Attempt:0,}" Apr 21 10:07:22.847845 containerd[1955]: time="2026-04-21T10:07:22.845867224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:07:22.847845 containerd[1955]: time="2026-04-21T10:07:22.845973489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:07:22.847845 containerd[1955]: time="2026-04-21T10:07:22.846010672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:22.847845 containerd[1955]: time="2026-04-21T10:07:22.846180689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:22.916881 systemd[1]: Started cri-containerd-70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1.scope - libcontainer container 70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1. Apr 21 10:07:22.948666 systemd-networkd[1831]: calia12692d9c53: Link UP Apr 21 10:07:22.953376 systemd-networkd[1831]: calia12692d9c53: Gained carrier Apr 21 10:07:23.069908 systemd-networkd[1831]: cali97f1de98980: Link UP Apr 21 10:07:23.072525 systemd-networkd[1831]: cali97f1de98980: Gained carrier Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:21.305 [ERROR][4702] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:21.358 [INFO][4702] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0 coredns-7d764666f9- kube-system af9f8050-444d-4935-9809-2c239d4c35de 968 0 2026-04-21 10:06:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-11 coredns-7d764666f9-cxnd9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia12692d9c53 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } 
{readiness-probe TCP 8181 0 }] [] }} ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Namespace="kube-system" Pod="coredns-7d764666f9-cxnd9" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:21.358 [INFO][4702] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Namespace="kube-system" Pod="coredns-7d764666f9-cxnd9" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:21.864 [INFO][4827] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" HandleID="k8s-pod-network.2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:21.974 [INFO][4827] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" HandleID="k8s-pod-network.2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000385470), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-11", "pod":"coredns-7d764666f9-cxnd9", "timestamp":"2026-04-21 10:07:21.864045123 +0000 UTC"}, Hostname:"ip-172-31-20-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004f8580)} Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:21.974 [INFO][4827] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.669 [INFO][4827] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.669 [INFO][4827] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-11' Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.681 [INFO][4827] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" host="ip-172-31-20-11" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.711 [INFO][4827] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-11" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.738 [INFO][4827] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.762 [INFO][4827] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.787 [INFO][4827] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.787 [INFO][4827] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" host="ip-172-31-20-11" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.794 [INFO][4827] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503 Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.835 [INFO][4827] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" host="ip-172-31-20-11" Apr 21 10:07:23.099268 containerd[1955]: 
2026-04-21 10:07:22.880 [INFO][4827] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.131/26] block=192.168.71.128/26 handle="k8s-pod-network.2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" host="ip-172-31-20-11" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.880 [INFO][4827] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.131/26] handle="k8s-pod-network.2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" host="ip-172-31-20-11" Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.880 [INFO][4827] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:23.099268 containerd[1955]: 2026-04-21 10:07:22.880 [INFO][4827] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.131/26] IPv6=[] ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" HandleID="k8s-pod-network.2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:23.103178 containerd[1955]: 2026-04-21 10:07:22.932 [INFO][4702] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Namespace="kube-system" Pod="coredns-7d764666f9-cxnd9" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"af9f8050-444d-4935-9809-2c239d4c35de", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"", Pod:"coredns-7d764666f9-cxnd9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia12692d9c53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.103178 containerd[1955]: 2026-04-21 10:07:22.932 [INFO][4702] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.131/32] ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Namespace="kube-system" Pod="coredns-7d764666f9-cxnd9" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:23.103178 containerd[1955]: 2026-04-21 10:07:22.932 [INFO][4702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia12692d9c53 
ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Namespace="kube-system" Pod="coredns-7d764666f9-cxnd9" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:23.103178 containerd[1955]: 2026-04-21 10:07:22.971 [INFO][4702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Namespace="kube-system" Pod="coredns-7d764666f9-cxnd9" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:23.103178 containerd[1955]: 2026-04-21 10:07:22.980 [INFO][4702] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Namespace="kube-system" Pod="coredns-7d764666f9-cxnd9" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"af9f8050-444d-4935-9809-2c239d4c35de", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503", Pod:"coredns-7d764666f9-cxnd9", 
Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia12692d9c53", MAC:"2a:f9:f8:e5:60:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.103178 containerd[1955]: 2026-04-21 10:07:23.045 [INFO][4702] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503" Namespace="kube-system" Pod="coredns-7d764666f9-cxnd9" WorkloadEndpoint="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:21.515 [ERROR][4736] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:21.632 [INFO][4736] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0 
calico-kube-controllers-74499fb665- calico-system 26d85b28-0419-4bec-8bca-4c6bc1376147 972 0 2026-04-21 10:07:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74499fb665 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-11 calico-kube-controllers-74499fb665-57b24 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali97f1de98980 [] [] }} ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Namespace="calico-system" Pod="calico-kube-controllers-74499fb665-57b24" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:21.633 [INFO][4736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Namespace="calico-system" Pod="calico-kube-controllers-74499fb665-57b24" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:22.018 [INFO][4861] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" HandleID="k8s-pod-network.25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:22.142 [INFO][4861] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" HandleID="k8s-pod-network.25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000394b10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-11", "pod":"calico-kube-controllers-74499fb665-57b24", "timestamp":"2026-04-21 10:07:22.01847691 +0000 UTC"}, Hostname:"ip-172-31-20-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000220c60)} Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:22.143 [INFO][4861] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:22.883 [INFO][4861] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:22.883 [INFO][4861] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-11' Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:22.919 [INFO][4861] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" host="ip-172-31-20-11" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:22.940 [INFO][4861] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-11" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:22.990 [INFO][4861] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:22.999 [INFO][4861] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:23.005 [INFO][4861] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:23.005 [INFO][4861] ipam/ipam.go 1245: Attempting to 
assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" host="ip-172-31-20-11" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:23.007 [INFO][4861] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9 Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:23.015 [INFO][4861] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" host="ip-172-31-20-11" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:23.048 [INFO][4861] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.132/26] block=192.168.71.128/26 handle="k8s-pod-network.25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" host="ip-172-31-20-11" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:23.048 [INFO][4861] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.132/26] handle="k8s-pod-network.25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" host="ip-172-31-20-11" Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:23.048 [INFO][4861] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:07:23.146534 containerd[1955]: 2026-04-21 10:07:23.048 [INFO][4861] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.132/26] IPv6=[] ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" HandleID="k8s-pod-network.25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 21 10:07:23.147850 containerd[1955]: 2026-04-21 10:07:23.054 [INFO][4736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Namespace="calico-system" Pod="calico-kube-controllers-74499fb665-57b24" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0", GenerateName:"calico-kube-controllers-74499fb665-", Namespace:"calico-system", SelfLink:"", UID:"26d85b28-0419-4bec-8bca-4c6bc1376147", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74499fb665", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"", Pod:"calico-kube-controllers-74499fb665-57b24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97f1de98980", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.147850 containerd[1955]: 2026-04-21 10:07:23.055 [INFO][4736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.132/32] ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Namespace="calico-system" Pod="calico-kube-controllers-74499fb665-57b24" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 21 10:07:23.147850 containerd[1955]: 2026-04-21 10:07:23.055 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97f1de98980 ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Namespace="calico-system" Pod="calico-kube-controllers-74499fb665-57b24" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 21 10:07:23.147850 containerd[1955]: 2026-04-21 10:07:23.075 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Namespace="calico-system" Pod="calico-kube-controllers-74499fb665-57b24" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 21 10:07:23.147850 containerd[1955]: 2026-04-21 10:07:23.094 [INFO][4736] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Namespace="calico-system" Pod="calico-kube-controllers-74499fb665-57b24" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0", GenerateName:"calico-kube-controllers-74499fb665-", Namespace:"calico-system", SelfLink:"", UID:"26d85b28-0419-4bec-8bca-4c6bc1376147", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74499fb665", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9", Pod:"calico-kube-controllers-74499fb665-57b24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97f1de98980", MAC:"b6:a2:d6:fc:83:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.147850 containerd[1955]: 2026-04-21 10:07:23.137 [INFO][4736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9" Namespace="calico-system" Pod="calico-kube-controllers-74499fb665-57b24" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0" Apr 
21 10:07:23.231946 containerd[1955]: time="2026-04-21T10:07:23.230402610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:07:23.235572 containerd[1955]: time="2026-04-21T10:07:23.231893635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:07:23.236266 containerd[1955]: time="2026-04-21T10:07:23.235830271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:23.236266 containerd[1955]: time="2026-04-21T10:07:23.236118451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:23.253339 systemd-networkd[1831]: calicdd7c039a12: Link UP Apr 21 10:07:23.255171 systemd-networkd[1831]: calicdd7c039a12: Gained carrier Apr 21 10:07:23.311020 containerd[1955]: time="2026-04-21T10:07:23.310934165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-n4jjg,Uid:7842ac67-69e7-4228-9787-d5e2cbd9c0b8,Namespace:kube-system,Attempt:1,} returns sandbox id \"70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1\"" Apr 21 10:07:23.333277 containerd[1955]: time="2026-04-21T10:07:23.332602882Z" level=info msg="CreateContainer within sandbox \"70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:07:23.386001 containerd[1955]: time="2026-04-21T10:07:23.381935332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:07:23.386001 containerd[1955]: time="2026-04-21T10:07:23.382081349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:07:23.386001 containerd[1955]: time="2026-04-21T10:07:23.382119804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:23.395891 containerd[1955]: time="2026-04-21T10:07:23.392996448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:21.329 [ERROR][4722] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:21.514 [INFO][4722] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0 csi-node-driver- calico-system 678541e7-a702-489b-a934-cdd3a561ab11 971 0 2026-04-21 10:06:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-11 csi-node-driver-8sx57 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicdd7c039a12 [] [] }} ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Namespace="calico-system" Pod="csi-node-driver-8sx57" WorkloadEndpoint="ip--172--31--20--11-k8s-csi--node--driver--8sx57-" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:21.514 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Namespace="calico-system" Pod="csi-node-driver-8sx57" 
WorkloadEndpoint="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:22.064 [INFO][4850] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" HandleID="k8s-pod-network.a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:22.153 [INFO][4850] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" HandleID="k8s-pod-network.a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b3980), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-11", "pod":"csi-node-driver-8sx57", "timestamp":"2026-04-21 10:07:22.064966988 +0000 UTC"}, Hostname:"ip-172-31-20-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186160)} Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:22.153 [INFO][4850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.052 [INFO][4850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.053 [INFO][4850] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-11' Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.066 [INFO][4850] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" host="ip-172-31-20-11" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.110 [INFO][4850] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-11" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.150 [INFO][4850] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.159 [INFO][4850] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.170 [INFO][4850] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.171 [INFO][4850] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" host="ip-172-31-20-11" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.176 [INFO][4850] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.200 [INFO][4850] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" host="ip-172-31-20-11" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.223 [INFO][4850] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.133/26] block=192.168.71.128/26 
handle="k8s-pod-network.a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" host="ip-172-31-20-11" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.223 [INFO][4850] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.133/26] handle="k8s-pod-network.a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" host="ip-172-31-20-11" Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.223 [INFO][4850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:23.407559 containerd[1955]: 2026-04-21 10:07:23.223 [INFO][4850] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.133/26] IPv6=[] ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" HandleID="k8s-pod-network.a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:23.410434 containerd[1955]: 2026-04-21 10:07:23.245 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Namespace="calico-system" Pod="csi-node-driver-8sx57" WorkloadEndpoint="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"678541e7-a702-489b-a934-cdd3a561ab11", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"", Pod:"csi-node-driver-8sx57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicdd7c039a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.410434 containerd[1955]: 2026-04-21 10:07:23.247 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.133/32] ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Namespace="calico-system" Pod="csi-node-driver-8sx57" WorkloadEndpoint="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:23.410434 containerd[1955]: 2026-04-21 10:07:23.247 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdd7c039a12 ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Namespace="calico-system" Pod="csi-node-driver-8sx57" WorkloadEndpoint="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:23.410434 containerd[1955]: 2026-04-21 10:07:23.258 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Namespace="calico-system" Pod="csi-node-driver-8sx57" WorkloadEndpoint="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:23.410434 containerd[1955]: 2026-04-21 10:07:23.260 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Namespace="calico-system" Pod="csi-node-driver-8sx57" WorkloadEndpoint="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"678541e7-a702-489b-a934-cdd3a561ab11", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d", Pod:"csi-node-driver-8sx57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicdd7c039a12", MAC:"32:10:48:3f:95:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.410434 containerd[1955]: 2026-04-21 10:07:23.370 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d" Namespace="calico-system" Pod="csi-node-driver-8sx57" WorkloadEndpoint="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0" Apr 21 10:07:23.498633 containerd[1955]: time="2026-04-21T10:07:23.498345389Z" level=info msg="CreateContainer within sandbox \"70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13d2fcfd9e12bd3860e42dac7f0ca5a7a7d9379c7122fdad2b0a11c3d136d6f3\"" Apr 21 10:07:23.509349 containerd[1955]: time="2026-04-21T10:07:23.509051055Z" level=info msg="StartContainer for \"13d2fcfd9e12bd3860e42dac7f0ca5a7a7d9379c7122fdad2b0a11c3d136d6f3\"" Apr 21 10:07:23.532758 systemd-networkd[1831]: cali76289bd06d2: Gained IPv6LL Apr 21 10:07:23.553081 systemd-networkd[1831]: cali40de97ef5a8: Link UP Apr 21 10:07:23.570469 containerd[1955]: time="2026-04-21T10:07:23.569671582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756779796c-l52kg,Uid:a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6,Namespace:calico-system,Attempt:1,} returns sandbox id \"fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254\"" Apr 21 10:07:23.576391 systemd-networkd[1831]: cali40de97ef5a8: Gained carrier Apr 21 10:07:23.653905 systemd[1]: Started cri-containerd-2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503.scope - libcontainer container 2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503. Apr 21 10:07:23.680144 systemd[1]: Started cri-containerd-25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9.scope - libcontainer container 25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9. 
Apr 21 10:07:23.699869 containerd[1955]: time="2026-04-21T10:07:23.699584307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:21.543 [ERROR][4783] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:21.650 [INFO][4783] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0 calico-apiserver-756779796c- calico-system dcac6754-a482-47df-a01f-171bd1cc8980 974 0 2026-04-21 10:06:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:756779796c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-11 calico-apiserver-756779796c-mwdfq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali40de97ef5a8 [] [] }} ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Namespace="calico-system" Pod="calico-apiserver-756779796c-mwdfq" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:21.650 [INFO][4783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Namespace="calico-system" Pod="calico-apiserver-756779796c-mwdfq" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:22.089 [INFO][4863] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" HandleID="k8s-pod-network.9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:22.170 [INFO][4863] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" HandleID="k8s-pod-network.9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b1820), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-11", "pod":"calico-apiserver-756779796c-mwdfq", "timestamp":"2026-04-21 10:07:22.089036581 +0000 UTC"}, Hostname:"ip-172-31-20-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40005e4420)} Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:22.170 [INFO][4863] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.226 [INFO][4863] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.226 [INFO][4863] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-11' Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.233 [INFO][4863] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" host="ip-172-31-20-11" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.268 [INFO][4863] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-11" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.346 [INFO][4863] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.369 [INFO][4863] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.382 [INFO][4863] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.382 [INFO][4863] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" host="ip-172-31-20-11" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.398 [INFO][4863] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3 Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.422 [INFO][4863] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" host="ip-172-31-20-11" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.472 [INFO][4863] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.134/26] block=192.168.71.128/26 
handle="k8s-pod-network.9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" host="ip-172-31-20-11" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.472 [INFO][4863] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.134/26] handle="k8s-pod-network.9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" host="ip-172-31-20-11" Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.476 [INFO][4863] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:23.706641 containerd[1955]: 2026-04-21 10:07:23.479 [INFO][4863] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.134/26] IPv6=[] ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" HandleID="k8s-pod-network.9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:23.709725 containerd[1955]: 2026-04-21 10:07:23.517 [INFO][4783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Namespace="calico-system" Pod="calico-apiserver-756779796c-mwdfq" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0", GenerateName:"calico-apiserver-756779796c-", Namespace:"calico-system", SelfLink:"", UID:"dcac6754-a482-47df-a01f-171bd1cc8980", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756779796c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"", Pod:"calico-apiserver-756779796c-mwdfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali40de97ef5a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.709725 containerd[1955]: 2026-04-21 10:07:23.518 [INFO][4783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.134/32] ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Namespace="calico-system" Pod="calico-apiserver-756779796c-mwdfq" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:23.709725 containerd[1955]: 2026-04-21 10:07:23.518 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40de97ef5a8 ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Namespace="calico-system" Pod="calico-apiserver-756779796c-mwdfq" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:23.709725 containerd[1955]: 2026-04-21 10:07:23.598 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Namespace="calico-system" Pod="calico-apiserver-756779796c-mwdfq" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:23.709725 containerd[1955]: 2026-04-21 10:07:23.598 [INFO][4783] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Namespace="calico-system" Pod="calico-apiserver-756779796c-mwdfq" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0", GenerateName:"calico-apiserver-756779796c-", Namespace:"calico-system", SelfLink:"", UID:"dcac6754-a482-47df-a01f-171bd1cc8980", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756779796c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3", Pod:"calico-apiserver-756779796c-mwdfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali40de97ef5a8", MAC:"36:dc:01:a9:65:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.709725 containerd[1955]: 2026-04-21 10:07:23.643 [INFO][4783] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3" Namespace="calico-system" Pod="calico-apiserver-756779796c-mwdfq" WorkloadEndpoint="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0" Apr 21 10:07:23.762342 containerd[1955]: time="2026-04-21T10:07:23.759355386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:07:23.762342 containerd[1955]: time="2026-04-21T10:07:23.759451962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:07:23.762342 containerd[1955]: time="2026-04-21T10:07:23.759480597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:23.762342 containerd[1955]: time="2026-04-21T10:07:23.759670244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:23.814926 systemd[1]: Started cri-containerd-13d2fcfd9e12bd3860e42dac7f0ca5a7a7d9379c7122fdad2b0a11c3d136d6f3.scope - libcontainer container 13d2fcfd9e12bd3860e42dac7f0ca5a7a7d9379c7122fdad2b0a11c3d136d6f3. 
Apr 21 10:07:23.848017 systemd-networkd[1831]: calib59fb0aaf00: Link UP Apr 21 10:07:23.855938 systemd-networkd[1831]: calib59fb0aaf00: Gained carrier Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:21.667 [ERROR][4758] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:21.759 [INFO][4758] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0 goldmane-9f7667bb8- calico-system 1a31afb8-9d55-404f-8363-717bc3de0918 970 0 2026-04-21 10:06:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-11 goldmane-9f7667bb8-fr8h9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib59fb0aaf00 [] [] }} ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" Namespace="calico-system" Pod="goldmane-9f7667bb8-fr8h9" WorkloadEndpoint="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:21.759 [INFO][4758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" Namespace="calico-system" Pod="goldmane-9f7667bb8-fr8h9" WorkloadEndpoint="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:22.132 [INFO][4887] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" HandleID="k8s-pod-network.fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" 
Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:22.181 [INFO][4887] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" HandleID="k8s-pod-network.fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e96c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-11", "pod":"goldmane-9f7667bb8-fr8h9", "timestamp":"2026-04-21 10:07:22.132315077 +0000 UTC"}, Hostname:"ip-172-31-20-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003be580)} Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:22.181 [INFO][4887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.473 [INFO][4887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.473 [INFO][4887] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-11' Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.501 [INFO][4887] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" host="ip-172-31-20-11" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.542 [INFO][4887] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-11" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.597 [INFO][4887] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.611 [INFO][4887] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.640 [INFO][4887] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.640 [INFO][4887] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" host="ip-172-31-20-11" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.665 [INFO][4887] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.715 [INFO][4887] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" host="ip-172-31-20-11" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.782 [INFO][4887] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.135/26] block=192.168.71.128/26 
handle="k8s-pod-network.fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" host="ip-172-31-20-11" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.789 [INFO][4887] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.135/26] handle="k8s-pod-network.fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" host="ip-172-31-20-11" Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.789 [INFO][4887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:23.989815 containerd[1955]: 2026-04-21 10:07:23.789 [INFO][4887] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.135/26] IPv6=[] ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" HandleID="k8s-pod-network.fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:23.993336 containerd[1955]: 2026-04-21 10:07:23.829 [INFO][4758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" Namespace="calico-system" Pod="goldmane-9f7667bb8-fr8h9" WorkloadEndpoint="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1a31afb8-9d55-404f-8363-717bc3de0918", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"", Pod:"goldmane-9f7667bb8-fr8h9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib59fb0aaf00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.993336 containerd[1955]: 2026-04-21 10:07:23.830 [INFO][4758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.135/32] ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" Namespace="calico-system" Pod="goldmane-9f7667bb8-fr8h9" WorkloadEndpoint="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:23.993336 containerd[1955]: 2026-04-21 10:07:23.830 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib59fb0aaf00 ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" Namespace="calico-system" Pod="goldmane-9f7667bb8-fr8h9" WorkloadEndpoint="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:23.993336 containerd[1955]: 2026-04-21 10:07:23.885 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" Namespace="calico-system" Pod="goldmane-9f7667bb8-fr8h9" WorkloadEndpoint="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:23.993336 containerd[1955]: 2026-04-21 10:07:23.890 [INFO][4758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" 
Namespace="calico-system" Pod="goldmane-9f7667bb8-fr8h9" WorkloadEndpoint="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1a31afb8-9d55-404f-8363-717bc3de0918", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc", Pod:"goldmane-9f7667bb8-fr8h9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib59fb0aaf00", MAC:"8a:9a:73:14:31:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:23.993336 containerd[1955]: 2026-04-21 10:07:23.957 [INFO][4758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc" Namespace="calico-system" Pod="goldmane-9f7667bb8-fr8h9" WorkloadEndpoint="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0" Apr 21 10:07:24.041769 
systemd[1]: Started cri-containerd-a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d.scope - libcontainer container a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d. Apr 21 10:07:24.057687 containerd[1955]: time="2026-04-21T10:07:24.057287268Z" level=info msg="StartContainer for \"13d2fcfd9e12bd3860e42dac7f0ca5a7a7d9379c7122fdad2b0a11c3d136d6f3\" returns successfully" Apr 21 10:07:24.108178 systemd-networkd[1831]: cali157128cdcd1: Link UP Apr 21 10:07:24.111247 systemd-networkd[1831]: cali157128cdcd1: Gained carrier Apr 21 10:07:24.117668 containerd[1955]: time="2026-04-21T10:07:24.114609123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:07:24.117668 containerd[1955]: time="2026-04-21T10:07:24.114700176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:07:24.117668 containerd[1955]: time="2026-04-21T10:07:24.114725521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:24.117668 containerd[1955]: time="2026-04-21T10:07:24.114872474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:24.150544 containerd[1955]: time="2026-04-21T10:07:24.150187944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cxnd9,Uid:af9f8050-444d-4935-9809-2c239d4c35de,Namespace:kube-system,Attempt:1,} returns sandbox id \"2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503\"" Apr 21 10:07:24.175872 containerd[1955]: time="2026-04-21T10:07:24.175818076Z" level=info msg="CreateContainer within sandbox \"2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:07:24.204586 containerd[1955]: time="2026-04-21T10:07:24.204204907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:07:24.207326 containerd[1955]: time="2026-04-21T10:07:24.206989122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:07:24.207326 containerd[1955]: time="2026-04-21T10:07:24.207058637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:24.210280 containerd[1955]: time="2026-04-21T10:07:24.209177864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.018 [ERROR][4997] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.111 [INFO][4997] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0 whisker-669f97dc6b- calico-system 009c7e1e-6291-4854-a472-a094b3df1eb4 998 0 2026-04-21 10:07:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:669f97dc6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-11 whisker-669f97dc6b-fnwbg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali157128cdcd1 [] [] }} ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Namespace="calico-system" Pod="whisker-669f97dc6b-fnwbg" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.112 [INFO][4997] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Namespace="calico-system" Pod="whisker-669f97dc6b-fnwbg" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.732 [INFO][5045] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" HandleID="k8s-pod-network.7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Workload="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" Apr 21 10:07:24.214077 containerd[1955]: 
2026-04-21 10:07:23.817 [INFO][5045] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" HandleID="k8s-pod-network.7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Workload="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fd8f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-11", "pod":"whisker-669f97dc6b-fnwbg", "timestamp":"2026-04-21 10:07:23.732918882 +0000 UTC"}, Hostname:"ip-172-31-20-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003da000)} Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.818 [INFO][5045] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.822 [INFO][5045] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.822 [INFO][5045] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-11' Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.858 [INFO][5045] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" host="ip-172-31-20-11" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.921 [INFO][5045] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-11" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.970 [INFO][5045] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:23.982 [INFO][5045] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:24.006 [INFO][5045] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ip-172-31-20-11" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:24.006 [INFO][5045] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" host="ip-172-31-20-11" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:24.017 [INFO][5045] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0 Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:24.037 [INFO][5045] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" host="ip-172-31-20-11" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:24.060 [INFO][5045] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.136/26] block=192.168.71.128/26 
handle="k8s-pod-network.7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" host="ip-172-31-20-11" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:24.060 [INFO][5045] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.136/26] handle="k8s-pod-network.7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" host="ip-172-31-20-11" Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:24.060 [INFO][5045] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:24.214077 containerd[1955]: 2026-04-21 10:07:24.060 [INFO][5045] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.136/26] IPv6=[] ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" HandleID="k8s-pod-network.7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Workload="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" Apr 21 10:07:24.216305 containerd[1955]: 2026-04-21 10:07:24.089 [INFO][4997] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Namespace="calico-system" Pod="whisker-669f97dc6b-fnwbg" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0", GenerateName:"whisker-669f97dc6b-", Namespace:"calico-system", SelfLink:"", UID:"009c7e1e-6291-4854-a472-a094b3df1eb4", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669f97dc6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"", Pod:"whisker-669f97dc6b-fnwbg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali157128cdcd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:24.216305 containerd[1955]: 2026-04-21 10:07:24.090 [INFO][4997] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.136/32] ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Namespace="calico-system" Pod="whisker-669f97dc6b-fnwbg" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" Apr 21 10:07:24.216305 containerd[1955]: 2026-04-21 10:07:24.090 [INFO][4997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali157128cdcd1 ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Namespace="calico-system" Pod="whisker-669f97dc6b-fnwbg" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" Apr 21 10:07:24.216305 containerd[1955]: 2026-04-21 10:07:24.119 [INFO][4997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Namespace="calico-system" Pod="whisker-669f97dc6b-fnwbg" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" Apr 21 10:07:24.216305 containerd[1955]: 2026-04-21 10:07:24.129 [INFO][4997] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" 
Namespace="calico-system" Pod="whisker-669f97dc6b-fnwbg" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0", GenerateName:"whisker-669f97dc6b-", Namespace:"calico-system", SelfLink:"", UID:"009c7e1e-6291-4854-a472-a094b3df1eb4", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669f97dc6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0", Pod:"whisker-669f97dc6b-fnwbg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali157128cdcd1", MAC:"ca:85:23:5c:a7:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:24.216305 containerd[1955]: 2026-04-21 10:07:24.184 [INFO][4997] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0" Namespace="calico-system" Pod="whisker-669f97dc6b-fnwbg" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--669f97dc6b--fnwbg-eth0" Apr 21 10:07:24.229670 
systemd-networkd[1831]: calif0e38021757: Gained IPv6LL Apr 21 10:07:24.286003 containerd[1955]: time="2026-04-21T10:07:24.285916651Z" level=info msg="CreateContainer within sandbox \"2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6b579fcd00272dfea9cb4b875665aca76e4ce9c152d1f46cc9275e63e7d6530\"" Apr 21 10:07:24.297984 containerd[1955]: time="2026-04-21T10:07:24.297916624Z" level=info msg="StartContainer for \"b6b579fcd00272dfea9cb4b875665aca76e4ce9c152d1f46cc9275e63e7d6530\"" Apr 21 10:07:24.305804 systemd[1]: Started cri-containerd-9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3.scope - libcontainer container 9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3. Apr 21 10:07:24.399850 containerd[1955]: time="2026-04-21T10:07:24.399145322Z" level=info msg="StopPodSandbox for \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\"" Apr 21 10:07:24.402738 systemd[1]: Started cri-containerd-fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc.scope - libcontainer container fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc. Apr 21 10:07:24.419712 systemd-networkd[1831]: calicdd7c039a12: Gained IPv6LL Apr 21 10:07:24.489030 containerd[1955]: time="2026-04-21T10:07:24.488865644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:07:24.489030 containerd[1955]: time="2026-04-21T10:07:24.488979497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:07:24.489767 containerd[1955]: time="2026-04-21T10:07:24.489574382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:24.493550 containerd[1955]: time="2026-04-21T10:07:24.490916401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:07:24.581003 systemd[1]: Started cri-containerd-b6b579fcd00272dfea9cb4b875665aca76e4ce9c152d1f46cc9275e63e7d6530.scope - libcontainer container b6b579fcd00272dfea9cb4b875665aca76e4ce9c152d1f46cc9275e63e7d6530. Apr 21 10:07:24.631828 systemd[1]: Started cri-containerd-7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0.scope - libcontainer container 7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0. Apr 21 10:07:24.673870 systemd-networkd[1831]: calia12692d9c53: Gained IPv6LL Apr 21 10:07:24.818331 containerd[1955]: time="2026-04-21T10:07:24.818225260Z" level=info msg="StartContainer for \"b6b579fcd00272dfea9cb4b875665aca76e4ce9c152d1f46cc9275e63e7d6530\" returns successfully" Apr 21 10:07:24.920229 containerd[1955]: time="2026-04-21T10:07:24.920067800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74499fb665-57b24,Uid:26d85b28-0419-4bec-8bca-4c6bc1376147,Namespace:calico-system,Attempt:1,} returns sandbox id \"25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9\"" Apr 21 10:07:24.929807 systemd-networkd[1831]: cali97f1de98980: Gained IPv6LL Apr 21 10:07:24.937493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2874609315.mount: Deactivated successfully. Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.701 [WARNING][5337] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0", GenerateName:"calico-apiserver-756779796c-", Namespace:"calico-system", SelfLink:"", UID:"a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756779796c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254", Pod:"calico-apiserver-756779796c-l52kg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali76289bd06d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.706 [INFO][5337] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.706 [INFO][5337] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" iface="eth0" netns="" Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.706 [INFO][5337] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.706 [INFO][5337] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.902 [INFO][5391] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" HandleID="k8s-pod-network.3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.902 [INFO][5391] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.902 [INFO][5391] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.966 [WARNING][5391] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" HandleID="k8s-pod-network.3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.966 [INFO][5391] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" HandleID="k8s-pod-network.3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.969 [INFO][5391] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:24.980498 containerd[1955]: 2026-04-21 10:07:24.975 [INFO][5337] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:24.980498 containerd[1955]: time="2026-04-21T10:07:24.980323033Z" level=info msg="TearDown network for sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\" successfully" Apr 21 10:07:24.980498 containerd[1955]: time="2026-04-21T10:07:24.980361080Z" level=info msg="StopPodSandbox for \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\" returns successfully" Apr 21 10:07:24.983756 containerd[1955]: time="2026-04-21T10:07:24.981826365Z" level=info msg="RemovePodSandbox for \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\"" Apr 21 10:07:24.983756 containerd[1955]: time="2026-04-21T10:07:24.981893538Z" level=info msg="Forcibly stopping sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\"" Apr 21 10:07:25.007113 containerd[1955]: time="2026-04-21T10:07:25.006746691Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-8sx57,Uid:678541e7-a702-489b-a934-cdd3a561ab11,Namespace:calico-system,Attempt:1,} returns sandbox id \"a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d\"" Apr 21 10:07:25.188953 kubelet[3162]: I0421 10:07:25.186991 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-cxnd9" podStartSLOduration=56.186965613 podStartE2EDuration="56.186965613s" podCreationTimestamp="2026-04-21 10:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:07:25.122071669 +0000 UTC m=+61.039433786" watchObservedRunningTime="2026-04-21 10:07:25.186965613 +0000 UTC m=+61.104327706" Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.220 [WARNING][5434] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0", GenerateName:"calico-apiserver-756779796c-", Namespace:"calico-system", SelfLink:"", UID:"a8a3131a-1ae1-4f2c-85eb-7bb7a52a06c6", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756779796c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254", Pod:"calico-apiserver-756779796c-l52kg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali76289bd06d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.222 [INFO][5434] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.222 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" iface="eth0" netns="" Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.222 [INFO][5434] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.222 [INFO][5434] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.304 [INFO][5446] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" HandleID="k8s-pod-network.3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.305 [INFO][5446] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.305 [INFO][5446] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.338 [WARNING][5446] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" HandleID="k8s-pod-network.3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.338 [INFO][5446] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" HandleID="k8s-pod-network.3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--l52kg-eth0" Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.342 [INFO][5446] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:25.353272 containerd[1955]: 2026-04-21 10:07:25.347 [INFO][5434] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b" Apr 21 10:07:25.355372 containerd[1955]: time="2026-04-21T10:07:25.354319829Z" level=info msg="TearDown network for sandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\" successfully" Apr 21 10:07:25.364406 containerd[1955]: time="2026-04-21T10:07:25.363207379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:07:25.366252 containerd[1955]: time="2026-04-21T10:07:25.365320903Z" level=info msg="RemovePodSandbox \"3387e0f166e367a3b7fb9f1457e40b12bac4376046f8c938aa64f3351806361b\" returns successfully" Apr 21 10:07:25.370297 containerd[1955]: time="2026-04-21T10:07:25.370245992Z" level=info msg="StopPodSandbox for \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\"" Apr 21 10:07:25.379220 systemd-networkd[1831]: cali40de97ef5a8: Gained IPv6LL Apr 21 10:07:25.472884 containerd[1955]: time="2026-04-21T10:07:25.471258390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756779796c-mwdfq,Uid:dcac6754-a482-47df-a01f-171bd1cc8980,Namespace:calico-system,Attempt:1,} returns sandbox id \"9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3\"" Apr 21 10:07:25.507807 systemd-networkd[1831]: cali157128cdcd1: Gained IPv6LL Apr 21 10:07:25.600090 containerd[1955]: time="2026-04-21T10:07:25.599138547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-fr8h9,Uid:1a31afb8-9d55-404f-8363-717bc3de0918,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc\"" Apr 21 10:07:25.737238 containerd[1955]: time="2026-04-21T10:07:25.735902195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669f97dc6b-fnwbg,Uid:009c7e1e-6291-4854-a472-a094b3df1eb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0\"" Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.572 [WARNING][5463] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.573 [INFO][5463] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.573 [INFO][5463] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" iface="eth0" netns="" Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.573 [INFO][5463] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.573 [INFO][5463] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.751 [INFO][5483] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" HandleID="k8s-pod-network.21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Workload="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.751 [INFO][5483] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.751 [INFO][5483] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.790 [WARNING][5483] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" HandleID="k8s-pod-network.21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Workload="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.790 [INFO][5483] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" HandleID="k8s-pod-network.21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Workload="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.797 [INFO][5483] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:25.808133 containerd[1955]: 2026-04-21 10:07:25.804 [INFO][5463] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:25.811743 containerd[1955]: time="2026-04-21T10:07:25.810624863Z" level=info msg="TearDown network for sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\" successfully" Apr 21 10:07:25.811743 containerd[1955]: time="2026-04-21T10:07:25.810668312Z" level=info msg="StopPodSandbox for \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\" returns successfully" Apr 21 10:07:25.811743 containerd[1955]: time="2026-04-21T10:07:25.811322447Z" level=info msg="RemovePodSandbox for \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\"" Apr 21 10:07:25.811743 containerd[1955]: time="2026-04-21T10:07:25.811384290Z" level=info msg="Forcibly stopping sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\"" Apr 21 10:07:25.861501 systemd[1]: Started sshd@7-172.31.20.11:22-4.175.71.9:36254.service - OpenSSH per-connection server daemon (4.175.71.9:36254). 
Apr 21 10:07:25.889981 systemd-networkd[1831]: calib59fb0aaf00: Gained IPv6LL Apr 21 10:07:26.127190 kubelet[3162]: I0421 10:07:26.127058 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-n4jjg" podStartSLOduration=56.127039837 podStartE2EDuration="56.127039837s" podCreationTimestamp="2026-04-21 10:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:07:25.190146458 +0000 UTC m=+61.107508551" watchObservedRunningTime="2026-04-21 10:07:26.127039837 +0000 UTC m=+62.044401942" Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.030 [WARNING][5517] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" WorkloadEndpoint="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.030 [INFO][5517] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.030 [INFO][5517] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" iface="eth0" netns="" Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.031 [INFO][5517] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.031 [INFO][5517] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.135 [INFO][5539] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" HandleID="k8s-pod-network.21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Workload="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.136 [INFO][5539] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.137 [INFO][5539] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.160 [WARNING][5539] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" HandleID="k8s-pod-network.21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Workload="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.160 [INFO][5539] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" HandleID="k8s-pod-network.21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Workload="ip--172--31--20--11-k8s-whisker--5668cff8fc--xc5cn-eth0" Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.165 [INFO][5539] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:26.174969 containerd[1955]: 2026-04-21 10:07:26.169 [INFO][5517] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186" Apr 21 10:07:26.174969 containerd[1955]: time="2026-04-21T10:07:26.174948484Z" level=info msg="TearDown network for sandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\" successfully" Apr 21 10:07:26.182052 containerd[1955]: time="2026-04-21T10:07:26.181911112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:07:26.182052 containerd[1955]: time="2026-04-21T10:07:26.182010737Z" level=info msg="RemovePodSandbox \"21275ea079eb3066827c0126331c15680105000b3b73bfe6800334903f295186\" returns successfully" Apr 21 10:07:26.185376 containerd[1955]: time="2026-04-21T10:07:26.184338641Z" level=info msg="StopPodSandbox for \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\"" Apr 21 10:07:26.301560 kernel: calico-node[4953]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.301 [WARNING][5558] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"af9f8050-444d-4935-9809-2c239d4c35de", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503", Pod:"coredns-7d764666f9-cxnd9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia12692d9c53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.302 [INFO][5558] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.302 [INFO][5558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" iface="eth0" netns="" Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.302 [INFO][5558] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.302 [INFO][5558] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.371 [INFO][5569] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" HandleID="k8s-pod-network.b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.371 [INFO][5569] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.371 [INFO][5569] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.404 [WARNING][5569] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" HandleID="k8s-pod-network.b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.404 [INFO][5569] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" HandleID="k8s-pod-network.b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.409 [INFO][5569] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:26.426166 containerd[1955]: 2026-04-21 10:07:26.418 [INFO][5558] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:26.426166 containerd[1955]: time="2026-04-21T10:07:26.425130234Z" level=info msg="TearDown network for sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\" successfully" Apr 21 10:07:26.426166 containerd[1955]: time="2026-04-21T10:07:26.425167032Z" level=info msg="StopPodSandbox for \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\" returns successfully" Apr 21 10:07:26.431212 containerd[1955]: time="2026-04-21T10:07:26.429961821Z" level=info msg="RemovePodSandbox for \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\"" Apr 21 10:07:26.431212 containerd[1955]: time="2026-04-21T10:07:26.430016484Z" level=info msg="Forcibly stopping sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\"" Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.588 [WARNING][5585] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"af9f8050-444d-4935-9809-2c239d4c35de", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"2b171adbaab7eecb1843024cd3493ee2fde6bbc07140dd0f25f0dced4f8be503", Pod:"coredns-7d764666f9-cxnd9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia12692d9c53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.589 [INFO][5585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.589 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" iface="eth0" netns="" Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.590 [INFO][5585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.590 [INFO][5585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.660 [INFO][5595] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" HandleID="k8s-pod-network.b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.662 [INFO][5595] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.662 [INFO][5595] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.718 [WARNING][5595] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" HandleID="k8s-pod-network.b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.719 [INFO][5595] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" HandleID="k8s-pod-network.b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--cxnd9-eth0" Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.722 [INFO][5595] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:26.731042 containerd[1955]: 2026-04-21 10:07:26.726 [INFO][5585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950" Apr 21 10:07:26.732277 containerd[1955]: time="2026-04-21T10:07:26.732190049Z" level=info msg="TearDown network for sandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\" successfully" Apr 21 10:07:26.740922 containerd[1955]: time="2026-04-21T10:07:26.740811114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:07:26.741340 containerd[1955]: time="2026-04-21T10:07:26.740937981Z" level=info msg="RemovePodSandbox \"b9259996c4f20e4dffc1ab6aa795f2d080338d492106c257cc9efc78a9edf950\" returns successfully" Apr 21 10:07:26.741947 containerd[1955]: time="2026-04-21T10:07:26.741875938Z" level=info msg="StopPodSandbox for \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\"" Apr 21 10:07:26.958267 sshd[5523]: Accepted publickey for core from 4.175.71.9 port 36254 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4 Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.815 [WARNING][5611] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7842ac67-69e7-4228-9787-d5e2cbd9c0b8", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1", Pod:"coredns-7d764666f9-n4jjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0e38021757", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.816 [INFO][5611] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.816 [INFO][5611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" iface="eth0" netns="" Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.816 [INFO][5611] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.816 [INFO][5611] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.933 [INFO][5619] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" HandleID="k8s-pod-network.107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.934 [INFO][5619] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.934 [INFO][5619] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.949 [WARNING][5619] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" HandleID="k8s-pod-network.107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.949 [INFO][5619] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" HandleID="k8s-pod-network.107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.952 [INFO][5619] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:26.960226 containerd[1955]: 2026-04-21 10:07:26.955 [INFO][5611] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:26.960226 containerd[1955]: time="2026-04-21T10:07:26.960127023Z" level=info msg="TearDown network for sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\" successfully" Apr 21 10:07:26.960226 containerd[1955]: time="2026-04-21T10:07:26.960164853Z" level=info msg="StopPodSandbox for \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\" returns successfully" Apr 21 10:07:26.966694 containerd[1955]: time="2026-04-21T10:07:26.963494742Z" level=info msg="RemovePodSandbox for \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\"" Apr 21 10:07:26.966694 containerd[1955]: time="2026-04-21T10:07:26.963634984Z" level=info msg="Forcibly stopping sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\"" Apr 21 10:07:26.964695 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:07:26.992050 systemd-logind[1930]: New session 8 of user core. 
Apr 21 10:07:26.996077 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.086 [WARNING][5634] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7842ac67-69e7-4228-9787-d5e2cbd9c0b8", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"70ca7c82fd913294c155bc0d2bb8f0c5fa6b329adad5067ab1cc25d8044b43e1", Pod:"coredns-7d764666f9-n4jjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0e38021757", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.087 [INFO][5634] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.087 [INFO][5634] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" iface="eth0" netns="" Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.087 [INFO][5634] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.087 [INFO][5634] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.144 [INFO][5643] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" HandleID="k8s-pod-network.107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.144 [INFO][5643] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.144 [INFO][5643] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.168 [WARNING][5643] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" HandleID="k8s-pod-network.107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.168 [INFO][5643] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" HandleID="k8s-pod-network.107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Workload="ip--172--31--20--11-k8s-coredns--7d764666f9--n4jjg-eth0" Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.171 [INFO][5643] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:07:27.179527 containerd[1955]: 2026-04-21 10:07:27.175 [INFO][5634] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373" Apr 21 10:07:27.180444 containerd[1955]: time="2026-04-21T10:07:27.180214821Z" level=info msg="TearDown network for sandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\" successfully" Apr 21 10:07:27.188688 containerd[1955]: time="2026-04-21T10:07:27.188600088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:07:27.188910 containerd[1955]: time="2026-04-21T10:07:27.188723329Z" level=info msg="RemovePodSandbox \"107ed6270dced41699dd23a916d50793c177cbb6a6886b67de1690ccbbf15373\" returns successfully" Apr 21 10:07:27.448792 systemd-networkd[1831]: vxlan.calico: Link UP Apr 21 10:07:27.448809 systemd-networkd[1831]: vxlan.calico: Gained carrier Apr 21 10:07:27.945604 sshd[5523]: pam_unix(sshd:session): session closed for user core Apr 21 10:07:27.961758 systemd[1]: sshd@7-172.31.20.11:22-4.175.71.9:36254.service: Deactivated successfully. Apr 21 10:07:27.970490 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:07:27.984080 systemd-logind[1930]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:07:27.992175 systemd-logind[1930]: Removed session 8. Apr 21 10:07:29.473962 systemd-networkd[1831]: vxlan.calico: Gained IPv6LL Apr 21 10:07:29.771626 containerd[1955]: time="2026-04-21T10:07:29.770352721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:29.774318 containerd[1955]: time="2026-04-21T10:07:29.774248031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 21 10:07:29.776268 containerd[1955]: time="2026-04-21T10:07:29.776190134Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:29.784329 containerd[1955]: time="2026-04-21T10:07:29.784224849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:29.789133 containerd[1955]: time="2026-04-21T10:07:29.788254735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 6.088580239s" Apr 21 10:07:29.789133 containerd[1955]: time="2026-04-21T10:07:29.788972706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 21 10:07:29.795588 containerd[1955]: time="2026-04-21T10:07:29.792398570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:07:29.810625 containerd[1955]: time="2026-04-21T10:07:29.810497015Z" level=info msg="CreateContainer within sandbox \"fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:07:29.835287 containerd[1955]: time="2026-04-21T10:07:29.833731579Z" level=info msg="CreateContainer within sandbox \"fa65fbe38059786dc13faa70c366695250dc63cf67ce41c224b9a117398e3254\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dcbd4b01a4ba0906a732c6aaae0018da63b62d5250a04a6ff547cc09b1b84e03\"" Apr 21 10:07:29.841575 containerd[1955]: time="2026-04-21T10:07:29.838131406Z" level=info msg="StartContainer for \"dcbd4b01a4ba0906a732c6aaae0018da63b62d5250a04a6ff547cc09b1b84e03\"" Apr 21 10:07:29.973874 systemd[1]: Started cri-containerd-dcbd4b01a4ba0906a732c6aaae0018da63b62d5250a04a6ff547cc09b1b84e03.scope - libcontainer container dcbd4b01a4ba0906a732c6aaae0018da63b62d5250a04a6ff547cc09b1b84e03. 
Apr 21 10:07:30.057366 containerd[1955]: time="2026-04-21T10:07:30.056318270Z" level=info msg="StartContainer for \"dcbd4b01a4ba0906a732c6aaae0018da63b62d5250a04a6ff547cc09b1b84e03\" returns successfully" Apr 21 10:07:31.134530 kubelet[3162]: I0421 10:07:31.132669 3162 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:07:31.606806 ntpd[1924]: Listen normally on 7 vxlan.calico 192.168.71.128:123 Apr 21 10:07:31.608002 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 7 vxlan.calico 192.168.71.128:123 Apr 21 10:07:31.608002 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 8 cali76289bd06d2 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 21 10:07:31.608002 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 9 calif0e38021757 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 21 10:07:31.608002 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 10 calia12692d9c53 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 21 10:07:31.608002 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 11 cali97f1de98980 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 21 10:07:31.608002 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 12 calicdd7c039a12 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 21 10:07:31.606955 ntpd[1924]: Listen normally on 8 cali76289bd06d2 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 21 10:07:31.607070 ntpd[1924]: Listen normally on 9 calif0e38021757 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 21 10:07:31.607146 ntpd[1924]: Listen normally on 10 calia12692d9c53 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 21 10:07:31.607219 ntpd[1924]: Listen normally on 11 cali97f1de98980 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 21 10:07:31.607291 ntpd[1924]: Listen normally on 12 calicdd7c039a12 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 21 10:07:31.610724 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 13 cali40de97ef5a8 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 21 10:07:31.610724 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 14 calib59fb0aaf00 
[fe80::ecee:eeff:feee:eeee%10]:123 Apr 21 10:07:31.610261 ntpd[1924]: Listen normally on 13 cali40de97ef5a8 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 21 10:07:31.610363 ntpd[1924]: Listen normally on 14 calib59fb0aaf00 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 21 10:07:31.610914 ntpd[1924]: Listen normally on 15 cali157128cdcd1 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 21 10:07:31.611053 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 15 cali157128cdcd1 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 21 10:07:31.611053 ntpd[1924]: 21 Apr 10:07:31 ntpd[1924]: Listen normally on 16 vxlan.calico [fe80::6446:4aff:fe16:4239%12]:123 Apr 21 10:07:31.611001 ntpd[1924]: Listen normally on 16 vxlan.calico [fe80::6446:4aff:fe16:4239%12]:123 Apr 21 10:07:32.968127 containerd[1955]: time="2026-04-21T10:07:32.968047567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:32.969696 containerd[1955]: time="2026-04-21T10:07:32.969238575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 21 10:07:32.970848 containerd[1955]: time="2026-04-21T10:07:32.970799643Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:32.975212 containerd[1955]: time="2026-04-21T10:07:32.975138251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:07:32.977084 containerd[1955]: time="2026-04-21T10:07:32.977029400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.184552214s" Apr 21 10:07:32.977494 containerd[1955]: time="2026-04-21T10:07:32.977219311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 21 10:07:32.980219 containerd[1955]: time="2026-04-21T10:07:32.980158452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:07:33.015534 containerd[1955]: time="2026-04-21T10:07:33.013944813Z" level=info msg="CreateContainer within sandbox \"25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:07:33.047536 containerd[1955]: time="2026-04-21T10:07:33.043439281Z" level=info msg="CreateContainer within sandbox \"25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a9d40cb889643b30cf5177384e01e98e38da684477c5254dcb43f99e995cd1aa\"" Apr 21 10:07:33.047536 containerd[1955]: time="2026-04-21T10:07:33.046785185Z" level=info msg="StartContainer for \"a9d40cb889643b30cf5177384e01e98e38da684477c5254dcb43f99e995cd1aa\"" Apr 21 10:07:33.045808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501264498.mount: Deactivated successfully. Apr 21 10:07:33.126854 systemd[1]: Started cri-containerd-a9d40cb889643b30cf5177384e01e98e38da684477c5254dcb43f99e995cd1aa.scope - libcontainer container a9d40cb889643b30cf5177384e01e98e38da684477c5254dcb43f99e995cd1aa. Apr 21 10:07:33.138984 systemd[1]: Started sshd@8-172.31.20.11:22-4.175.71.9:36262.service - OpenSSH per-connection server daemon (4.175.71.9:36262). 
Apr 21 10:07:33.223611 containerd[1955]: time="2026-04-21T10:07:33.223451759Z" level=info msg="StartContainer for \"a9d40cb889643b30cf5177384e01e98e38da684477c5254dcb43f99e995cd1aa\" returns successfully" Apr 21 10:07:34.207787 sshd[5833]: Accepted publickey for core from 4.175.71.9 port 36262 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4 Apr 21 10:07:34.223221 kubelet[3162]: I0421 10:07:34.222671 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-756779796c-l52kg" podStartSLOduration=33.096537767 podStartE2EDuration="39.222604653s" podCreationTimestamp="2026-04-21 10:06:55 +0000 UTC" firstStartedPulling="2026-04-21 10:07:23.66577385 +0000 UTC m=+59.583135967" lastFinishedPulling="2026-04-21 10:07:29.791840748 +0000 UTC m=+65.709202853" observedRunningTime="2026-04-21 10:07:30.154235137 +0000 UTC m=+66.071597266" watchObservedRunningTime="2026-04-21 10:07:34.222604653 +0000 UTC m=+70.139966758" Apr 21 10:07:34.225032 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:07:34.243915 systemd-logind[1930]: New session 9 of user core. Apr 21 10:07:34.252866 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:07:34.272103 systemd[1]: run-containerd-runc-k8s.io-a9d40cb889643b30cf5177384e01e98e38da684477c5254dcb43f99e995cd1aa-runc.iveiZJ.mount: Deactivated successfully. 
Apr 21 10:07:34.351348 kubelet[3162]: I0421 10:07:34.351198 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74499fb665-57b24" podStartSLOduration=26.297077791 podStartE2EDuration="34.351174147s" podCreationTimestamp="2026-04-21 10:07:00 +0000 UTC" firstStartedPulling="2026-04-21 10:07:24.925497382 +0000 UTC m=+60.842859475" lastFinishedPulling="2026-04-21 10:07:32.979593654 +0000 UTC m=+68.896955831" observedRunningTime="2026-04-21 10:07:34.221486474 +0000 UTC m=+70.138848579" watchObservedRunningTime="2026-04-21 10:07:34.351174147 +0000 UTC m=+70.268536252"
Apr 21 10:07:34.721611 containerd[1955]: time="2026-04-21T10:07:34.720979696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:34.723183 containerd[1955]: time="2026-04-21T10:07:34.723118048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497"
Apr 21 10:07:34.723432 containerd[1955]: time="2026-04-21T10:07:34.723396864Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:34.729477 containerd[1955]: time="2026-04-21T10:07:34.729388278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:34.732622 containerd[1955]: time="2026-04-21T10:07:34.731489208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.751245934s"
Apr 21 10:07:34.732622 containerd[1955]: time="2026-04-21T10:07:34.731597910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\""
Apr 21 10:07:34.736123 containerd[1955]: time="2026-04-21T10:07:34.736036877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 21 10:07:34.749123 containerd[1955]: time="2026-04-21T10:07:34.749011953Z" level=info msg="CreateContainer within sandbox \"a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 21 10:07:34.796070 containerd[1955]: time="2026-04-21T10:07:34.795924066Z" level=info msg="CreateContainer within sandbox \"a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5c853fd7191a11fc3976eeed58375b8c4a9fc296953a9e65e857cf17aceaba2a\""
Apr 21 10:07:34.799579 containerd[1955]: time="2026-04-21T10:07:34.798785456Z" level=info msg="StartContainer for \"5c853fd7191a11fc3976eeed58375b8c4a9fc296953a9e65e857cf17aceaba2a\""
Apr 21 10:07:34.887236 systemd[1]: Started cri-containerd-5c853fd7191a11fc3976eeed58375b8c4a9fc296953a9e65e857cf17aceaba2a.scope - libcontainer container 5c853fd7191a11fc3976eeed58375b8c4a9fc296953a9e65e857cf17aceaba2a.
Apr 21 10:07:34.962017 containerd[1955]: time="2026-04-21T10:07:34.961917822Z" level=info msg="StartContainer for \"5c853fd7191a11fc3976eeed58375b8c4a9fc296953a9e65e857cf17aceaba2a\" returns successfully"
Apr 21 10:07:35.169961 containerd[1955]: time="2026-04-21T10:07:35.169885636Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:35.171316 containerd[1955]: time="2026-04-21T10:07:35.171158031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 21 10:07:35.179767 containerd[1955]: time="2026-04-21T10:07:35.179707324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 443.192248ms"
Apr 21 10:07:35.179983 containerd[1955]: time="2026-04-21T10:07:35.179953831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Apr 21 10:07:35.181661 containerd[1955]: time="2026-04-21T10:07:35.181589661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Apr 21 10:07:35.190065 containerd[1955]: time="2026-04-21T10:07:35.189819954Z" level=info msg="CreateContainer within sandbox \"9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 21 10:07:35.191346 sshd[5833]: pam_unix(sshd:session): session closed for user core
Apr 21 10:07:35.208704 systemd[1]: sshd@8-172.31.20.11:22-4.175.71.9:36262.service: Deactivated successfully.
Apr 21 10:07:35.234051 systemd[1]: session-9.scope: Deactivated successfully.
Apr 21 10:07:35.236116 containerd[1955]: time="2026-04-21T10:07:35.235744801Z" level=info msg="CreateContainer within sandbox \"9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e0d0f22da7f22f7966726c09c6c10b31ebd730aa6ad5c1606cb5363a0598b726\""
Apr 21 10:07:35.237055 systemd-logind[1930]: Session 9 logged out. Waiting for processes to exit.
Apr 21 10:07:35.241255 containerd[1955]: time="2026-04-21T10:07:35.240760344Z" level=info msg="StartContainer for \"e0d0f22da7f22f7966726c09c6c10b31ebd730aa6ad5c1606cb5363a0598b726\""
Apr 21 10:07:35.243389 systemd-logind[1930]: Removed session 9.
Apr 21 10:07:35.321105 systemd[1]: Started cri-containerd-e0d0f22da7f22f7966726c09c6c10b31ebd730aa6ad5c1606cb5363a0598b726.scope - libcontainer container e0d0f22da7f22f7966726c09c6c10b31ebd730aa6ad5c1606cb5363a0598b726.
Apr 21 10:07:35.417821 containerd[1955]: time="2026-04-21T10:07:35.417689129Z" level=info msg="StartContainer for \"e0d0f22da7f22f7966726c09c6c10b31ebd730aa6ad5c1606cb5363a0598b726\" returns successfully"
Apr 21 10:07:36.218045 kubelet[3162]: I0421 10:07:36.217145 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-756779796c-mwdfq" podStartSLOduration=31.518148185 podStartE2EDuration="41.217124469s" podCreationTimestamp="2026-04-21 10:06:55 +0000 UTC" firstStartedPulling="2026-04-21 10:07:25.482183418 +0000 UTC m=+61.399545511" lastFinishedPulling="2026-04-21 10:07:35.181159618 +0000 UTC m=+71.098521795" observedRunningTime="2026-04-21 10:07:36.21682325 +0000 UTC m=+72.134185367" watchObservedRunningTime="2026-04-21 10:07:36.217124469 +0000 UTC m=+72.134486574"
Apr 21 10:07:37.908486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854983071.mount: Deactivated successfully.
Apr 21 10:07:38.203551 kubelet[3162]: I0421 10:07:38.203058 3162 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:07:38.818534 containerd[1955]: time="2026-04-21T10:07:38.816990962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:38.820771 containerd[1955]: time="2026-04-21T10:07:38.820716232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980"
Apr 21 10:07:38.822446 containerd[1955]: time="2026-04-21T10:07:38.822389124Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:38.828597 containerd[1955]: time="2026-04-21T10:07:38.828538297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:38.831462 containerd[1955]: time="2026-04-21T10:07:38.831399159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.649737835s"
Apr 21 10:07:38.831734 containerd[1955]: time="2026-04-21T10:07:38.831699765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\""
Apr 21 10:07:38.834756 containerd[1955]: time="2026-04-21T10:07:38.834709117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 21 10:07:38.854199 containerd[1955]: time="2026-04-21T10:07:38.853879265Z" level=info msg="CreateContainer within sandbox \"fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 21 10:07:38.913883 containerd[1955]: time="2026-04-21T10:07:38.913560492Z" level=info msg="CreateContainer within sandbox \"fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b43bd77f9d7c42c64ea7685cb5221e5c5766f2b16eccf9c11cc4bbf7e24c8b88\""
Apr 21 10:07:38.915678 containerd[1955]: time="2026-04-21T10:07:38.914345708Z" level=info msg="StartContainer for \"b43bd77f9d7c42c64ea7685cb5221e5c5766f2b16eccf9c11cc4bbf7e24c8b88\""
Apr 21 10:07:38.999127 systemd[1]: run-containerd-runc-k8s.io-b43bd77f9d7c42c64ea7685cb5221e5c5766f2b16eccf9c11cc4bbf7e24c8b88-runc.vk2qMu.mount: Deactivated successfully.
Apr 21 10:07:39.013036 systemd[1]: Started cri-containerd-b43bd77f9d7c42c64ea7685cb5221e5c5766f2b16eccf9c11cc4bbf7e24c8b88.scope - libcontainer container b43bd77f9d7c42c64ea7685cb5221e5c5766f2b16eccf9c11cc4bbf7e24c8b88.
Apr 21 10:07:39.133244 containerd[1955]: time="2026-04-21T10:07:39.132062682Z" level=info msg="StartContainer for \"b43bd77f9d7c42c64ea7685cb5221e5c5766f2b16eccf9c11cc4bbf7e24c8b88\" returns successfully"
Apr 21 10:07:39.243562 kubelet[3162]: I0421 10:07:39.242760 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-fr8h9" podStartSLOduration=31.015845487 podStartE2EDuration="44.242701682s" podCreationTimestamp="2026-04-21 10:06:55 +0000 UTC" firstStartedPulling="2026-04-21 10:07:25.607123641 +0000 UTC m=+61.524485747" lastFinishedPulling="2026-04-21 10:07:38.833979825 +0000 UTC m=+74.751341942" observedRunningTime="2026-04-21 10:07:39.241240288 +0000 UTC m=+75.158602405" watchObservedRunningTime="2026-04-21 10:07:39.242701682 +0000 UTC m=+75.160063907"
Apr 21 10:07:39.882289 systemd[1]: run-containerd-runc-k8s.io-b43bd77f9d7c42c64ea7685cb5221e5c5766f2b16eccf9c11cc4bbf7e24c8b88-runc.UYE8Ut.mount: Deactivated successfully.
Apr 21 10:07:40.393048 systemd[1]: Started sshd@9-172.31.20.11:22-4.175.71.9:32966.service - OpenSSH per-connection server daemon (4.175.71.9:32966).
Apr 21 10:07:40.548111 containerd[1955]: time="2026-04-21T10:07:40.548020413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:40.549653 containerd[1955]: time="2026-04-21T10:07:40.549595648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804"
Apr 21 10:07:40.550887 containerd[1955]: time="2026-04-21T10:07:40.550657650Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:40.558007 containerd[1955]: time="2026-04-21T10:07:40.557388562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:40.560922 containerd[1955]: time="2026-04-21T10:07:40.560872511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.725896813s"
Apr 21 10:07:40.561123 containerd[1955]: time="2026-04-21T10:07:40.561092870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\""
Apr 21 10:07:40.562824 containerd[1955]: time="2026-04-21T10:07:40.562776087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 21 10:07:40.570388 containerd[1955]: time="2026-04-21T10:07:40.570063213Z" level=info msg="CreateContainer within sandbox \"7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 21 10:07:40.604045 containerd[1955]: time="2026-04-21T10:07:40.603974988Z" level=info msg="CreateContainer within sandbox \"7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c9bc7f1fd7be42f12424aafc978928a418a1bcac04838d43459be87527728a25\""
Apr 21 10:07:40.606072 containerd[1955]: time="2026-04-21T10:07:40.605153125Z" level=info msg="StartContainer for \"c9bc7f1fd7be42f12424aafc978928a418a1bcac04838d43459be87527728a25\""
Apr 21 10:07:40.674858 systemd[1]: Started cri-containerd-c9bc7f1fd7be42f12424aafc978928a418a1bcac04838d43459be87527728a25.scope - libcontainer container c9bc7f1fd7be42f12424aafc978928a418a1bcac04838d43459be87527728a25.
Apr 21 10:07:40.759858 containerd[1955]: time="2026-04-21T10:07:40.759668425Z" level=info msg="StartContainer for \"c9bc7f1fd7be42f12424aafc978928a418a1bcac04838d43459be87527728a25\" returns successfully"
Apr 21 10:07:41.282260 systemd[1]: run-containerd-runc-k8s.io-b43bd77f9d7c42c64ea7685cb5221e5c5766f2b16eccf9c11cc4bbf7e24c8b88-runc.QblPIx.mount: Deactivated successfully.
Apr 21 10:07:41.460565 sshd[6104]: Accepted publickey for core from 4.175.71.9 port 32966 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:07:41.463577 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:07:41.475085 systemd-logind[1930]: New session 10 of user core.
Apr 21 10:07:41.479861 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 21 10:07:42.480592 sshd[6104]: pam_unix(sshd:session): session closed for user core
Apr 21 10:07:42.492910 systemd[1]: sshd@9-172.31.20.11:22-4.175.71.9:32966.service: Deactivated successfully.
Apr 21 10:07:42.499465 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 10:07:42.502376 systemd-logind[1930]: Session 10 logged out. Waiting for processes to exit.
Apr 21 10:07:42.511920 systemd-logind[1930]: Removed session 10.
Apr 21 10:07:42.553288 containerd[1955]: time="2026-04-21T10:07:42.553102915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:42.557314 containerd[1955]: time="2026-04-21T10:07:42.557240795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Apr 21 10:07:42.562701 containerd[1955]: time="2026-04-21T10:07:42.562378414Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:42.574604 containerd[1955]: time="2026-04-21T10:07:42.574452512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:42.579109 containerd[1955]: time="2026-04-21T10:07:42.576580276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.013382334s"
Apr 21 10:07:42.579109 containerd[1955]: time="2026-04-21T10:07:42.576657138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\""
Apr 21 10:07:42.582361 containerd[1955]: time="2026-04-21T10:07:42.580441813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 21 10:07:42.592564 containerd[1955]: time="2026-04-21T10:07:42.592464814Z" level=info msg="CreateContainer within sandbox \"a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 21 10:07:42.677855 containerd[1955]: time="2026-04-21T10:07:42.677751513Z" level=info msg="CreateContainer within sandbox \"a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e07b439431a387dbde61b5442e7718c64bb7f478d356165bd955f1f50bc2c88e\""
Apr 21 10:07:42.679548 containerd[1955]: time="2026-04-21T10:07:42.679238180Z" level=info msg="StartContainer for \"e07b439431a387dbde61b5442e7718c64bb7f478d356165bd955f1f50bc2c88e\""
Apr 21 10:07:42.783169 systemd[1]: Started cri-containerd-e07b439431a387dbde61b5442e7718c64bb7f478d356165bd955f1f50bc2c88e.scope - libcontainer container e07b439431a387dbde61b5442e7718c64bb7f478d356165bd955f1f50bc2c88e.
Apr 21 10:07:42.862049 containerd[1955]: time="2026-04-21T10:07:42.861960208Z" level=info msg="StartContainer for \"e07b439431a387dbde61b5442e7718c64bb7f478d356165bd955f1f50bc2c88e\" returns successfully"
Apr 21 10:07:43.275066 kubelet[3162]: I0421 10:07:43.274212 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-8sx57" podStartSLOduration=26.717380335 podStartE2EDuration="44.274187266s" podCreationTimestamp="2026-04-21 10:06:59 +0000 UTC" firstStartedPulling="2026-04-21 10:07:25.023088874 +0000 UTC m=+60.940450979" lastFinishedPulling="2026-04-21 10:07:42.579895805 +0000 UTC m=+78.497257910" observedRunningTime="2026-04-21 10:07:43.271683666 +0000 UTC m=+79.189045844" watchObservedRunningTime="2026-04-21 10:07:43.274187266 +0000 UTC m=+79.191549371"
Apr 21 10:07:43.646800 kubelet[3162]: I0421 10:07:43.646433 3162 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 21 10:07:43.646800 kubelet[3162]: I0421 10:07:43.646616 3162 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 21 10:07:44.467917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4186431833.mount: Deactivated successfully.
Apr 21 10:07:44.485349 containerd[1955]: time="2026-04-21T10:07:44.485269605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:44.487416 containerd[1955]: time="2026-04-21T10:07:44.487222489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Apr 21 10:07:44.489094 containerd[1955]: time="2026-04-21T10:07:44.488386698Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:44.495848 containerd[1955]: time="2026-04-21T10:07:44.495751167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:44.498412 containerd[1955]: time="2026-04-21T10:07:44.498173942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.917660872s"
Apr 21 10:07:44.498412 containerd[1955]: time="2026-04-21T10:07:44.498251693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Apr 21 10:07:44.510141 containerd[1955]: time="2026-04-21T10:07:44.509497701Z" level=info msg="CreateContainer within sandbox \"7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 21 10:07:44.533818 containerd[1955]: time="2026-04-21T10:07:44.533575830Z" level=info msg="CreateContainer within sandbox \"7638781ae575ab861c0999d5595fd36110fceb37cb7e49d9a734ed04ee49dbc0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"15198928d95d54a37919a3089822dfdad50413b3cde32cc71c66cc750caf7753\""
Apr 21 10:07:44.535554 containerd[1955]: time="2026-04-21T10:07:44.535013789Z" level=info msg="StartContainer for \"15198928d95d54a37919a3089822dfdad50413b3cde32cc71c66cc750caf7753\""
Apr 21 10:07:44.607865 systemd[1]: Started cri-containerd-15198928d95d54a37919a3089822dfdad50413b3cde32cc71c66cc750caf7753.scope - libcontainer container 15198928d95d54a37919a3089822dfdad50413b3cde32cc71c66cc750caf7753.
Apr 21 10:07:44.735220 containerd[1955]: time="2026-04-21T10:07:44.732498275Z" level=info msg="StartContainer for \"15198928d95d54a37919a3089822dfdad50413b3cde32cc71c66cc750caf7753\" returns successfully"
Apr 21 10:07:47.664046 systemd[1]: Started sshd@10-172.31.20.11:22-4.175.71.9:50892.service - OpenSSH per-connection server daemon (4.175.71.9:50892).
Apr 21 10:07:48.677402 sshd[6325]: Accepted publickey for core from 4.175.71.9 port 50892 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:07:48.682008 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:07:48.693303 systemd-logind[1930]: New session 11 of user core.
Apr 21 10:07:48.698851 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 21 10:07:49.510877 sshd[6325]: pam_unix(sshd:session): session closed for user core
Apr 21 10:07:49.518988 systemd[1]: sshd@10-172.31.20.11:22-4.175.71.9:50892.service: Deactivated successfully.
Apr 21 10:07:49.523740 systemd[1]: session-11.scope: Deactivated successfully.
Apr 21 10:07:49.525493 systemd-logind[1930]: Session 11 logged out. Waiting for processes to exit.
Apr 21 10:07:49.528336 systemd-logind[1930]: Removed session 11.
Apr 21 10:07:49.698447 systemd[1]: Started sshd@11-172.31.20.11:22-4.175.71.9:50900.service - OpenSSH per-connection server daemon (4.175.71.9:50900).
Apr 21 10:07:50.720554 sshd[6354]: Accepted publickey for core from 4.175.71.9 port 50900 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:07:50.723323 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:07:50.730981 systemd-logind[1930]: New session 12 of user core.
Apr 21 10:07:50.738830 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 10:07:51.639401 sshd[6354]: pam_unix(sshd:session): session closed for user core
Apr 21 10:07:51.647894 systemd[1]: sshd@11-172.31.20.11:22-4.175.71.9:50900.service: Deactivated successfully.
Apr 21 10:07:51.652665 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 10:07:51.654710 systemd-logind[1930]: Session 12 logged out. Waiting for processes to exit.
Apr 21 10:07:51.656683 systemd-logind[1930]: Removed session 12.
Apr 21 10:07:51.826082 systemd[1]: Started sshd@12-172.31.20.11:22-4.175.71.9:50910.service - OpenSSH per-connection server daemon (4.175.71.9:50910).
Apr 21 10:07:52.874959 sshd[6389]: Accepted publickey for core from 4.175.71.9 port 50910 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:07:52.878384 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:07:52.889994 systemd-logind[1930]: New session 13 of user core.
Apr 21 10:07:52.897844 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 10:07:53.730996 sshd[6389]: pam_unix(sshd:session): session closed for user core
Apr 21 10:07:53.738980 systemd[1]: sshd@12-172.31.20.11:22-4.175.71.9:50910.service: Deactivated successfully.
Apr 21 10:07:53.744496 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 10:07:53.746963 systemd-logind[1930]: Session 13 logged out. Waiting for processes to exit.
Apr 21 10:07:53.748855 systemd-logind[1930]: Removed session 13.
Apr 21 10:07:58.924050 systemd[1]: Started sshd@13-172.31.20.11:22-4.175.71.9:51514.service - OpenSSH per-connection server daemon (4.175.71.9:51514).
Apr 21 10:07:59.972272 sshd[6404]: Accepted publickey for core from 4.175.71.9 port 51514 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:07:59.975477 sshd[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:07:59.991599 systemd-logind[1930]: New session 14 of user core.
Apr 21 10:07:59.999990 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 10:08:00.933011 sshd[6404]: pam_unix(sshd:session): session closed for user core
Apr 21 10:08:00.945291 systemd[1]: sshd@13-172.31.20.11:22-4.175.71.9:51514.service: Deactivated successfully.
Apr 21 10:08:00.949615 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 10:08:00.951301 systemd-logind[1930]: Session 14 logged out. Waiting for processes to exit.
Apr 21 10:08:00.955893 systemd-logind[1930]: Removed session 14.
Apr 21 10:08:01.111917 systemd[1]: Started sshd@14-172.31.20.11:22-4.175.71.9:51516.service - OpenSSH per-connection server daemon (4.175.71.9:51516).
Apr 21 10:08:02.128850 sshd[6426]: Accepted publickey for core from 4.175.71.9 port 51516 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:08:02.131822 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:08:02.141385 systemd-logind[1930]: New session 15 of user core.
Apr 21 10:08:02.148817 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 10:08:03.282257 sshd[6426]: pam_unix(sshd:session): session closed for user core
Apr 21 10:08:03.288697 systemd[1]: sshd@14-172.31.20.11:22-4.175.71.9:51516.service: Deactivated successfully.
Apr 21 10:08:03.292143 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:08:03.295971 systemd-logind[1930]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:08:03.298905 systemd-logind[1930]: Removed session 15.
Apr 21 10:08:03.463029 systemd[1]: Started sshd@15-172.31.20.11:22-4.175.71.9:51522.service - OpenSSH per-connection server daemon (4.175.71.9:51522).
Apr 21 10:08:04.467246 sshd[6439]: Accepted publickey for core from 4.175.71.9 port 51522 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:08:04.470117 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:08:04.478624 systemd-logind[1930]: New session 16 of user core.
Apr 21 10:08:04.488869 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:08:06.126796 sshd[6439]: pam_unix(sshd:session): session closed for user core
Apr 21 10:08:06.139905 systemd[1]: sshd@15-172.31.20.11:22-4.175.71.9:51522.service: Deactivated successfully.
Apr 21 10:08:06.145384 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:08:06.148693 systemd-logind[1930]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:08:06.152768 systemd-logind[1930]: Removed session 16.
Apr 21 10:08:06.316534 systemd[1]: Started sshd@16-172.31.20.11:22-4.175.71.9:37104.service - OpenSSH per-connection server daemon (4.175.71.9:37104).
Apr 21 10:08:07.369222 sshd[6485]: Accepted publickey for core from 4.175.71.9 port 37104 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:08:07.372251 sshd[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:08:07.380890 systemd-logind[1930]: New session 17 of user core.
Apr 21 10:08:07.386816 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:08:08.459891 sshd[6485]: pam_unix(sshd:session): session closed for user core
Apr 21 10:08:08.467307 systemd[1]: sshd@16-172.31.20.11:22-4.175.71.9:37104.service: Deactivated successfully.
Apr 21 10:08:08.473816 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:08:08.476236 systemd-logind[1930]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:08:08.479138 systemd-logind[1930]: Removed session 17.
Apr 21 10:08:08.646065 systemd[1]: Started sshd@17-172.31.20.11:22-4.175.71.9:37118.service - OpenSSH per-connection server daemon (4.175.71.9:37118).
Apr 21 10:08:09.678091 sshd[6510]: Accepted publickey for core from 4.175.71.9 port 37118 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:08:09.681182 sshd[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:08:09.691653 systemd-logind[1930]: New session 18 of user core.
Apr 21 10:08:09.701818 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:08:10.503672 sshd[6510]: pam_unix(sshd:session): session closed for user core
Apr 21 10:08:10.510042 systemd-logind[1930]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:08:10.511563 systemd[1]: sshd@17-172.31.20.11:22-4.175.71.9:37118.service: Deactivated successfully.
Apr 21 10:08:10.517585 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:08:10.523877 systemd-logind[1930]: Removed session 18.
Apr 21 10:08:11.366559 kubelet[3162]: I0421 10:08:11.364993 3162 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-669f97dc6b-fnwbg" podStartSLOduration=30.626962482 podStartE2EDuration="49.364969237s" podCreationTimestamp="2026-04-21 10:07:22 +0000 UTC" firstStartedPulling="2026-04-21 10:07:25.76372442 +0000 UTC m=+61.681086525" lastFinishedPulling="2026-04-21 10:07:44.501731175 +0000 UTC m=+80.419093280" observedRunningTime="2026-04-21 10:07:45.288815246 +0000 UTC m=+81.206177375" watchObservedRunningTime="2026-04-21 10:08:11.364969237 +0000 UTC m=+107.282331354"
Apr 21 10:08:14.347260 kubelet[3162]: I0421 10:08:14.346886 3162 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:08:15.693103 systemd[1]: Started sshd@18-172.31.20.11:22-4.175.71.9:45688.service - OpenSSH per-connection server daemon (4.175.71.9:45688).
Apr 21 10:08:16.734099 sshd[6546]: Accepted publickey for core from 4.175.71.9 port 45688 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:08:16.737272 sshd[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:08:16.747711 systemd-logind[1930]: New session 19 of user core.
Apr 21 10:08:16.751807 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:08:17.570174 sshd[6546]: pam_unix(sshd:session): session closed for user core
Apr 21 10:08:17.577154 systemd-logind[1930]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:08:17.578679 systemd[1]: sshd@18-172.31.20.11:22-4.175.71.9:45688.service: Deactivated successfully.
Apr 21 10:08:17.583058 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:08:17.585495 systemd-logind[1930]: Removed session 19.
Apr 21 10:08:22.758409 systemd[1]: Started sshd@19-172.31.20.11:22-4.175.71.9:45700.service - OpenSSH per-connection server daemon (4.175.71.9:45700).
Apr 21 10:08:23.808560 sshd[6580]: Accepted publickey for core from 4.175.71.9 port 45700 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:08:23.811128 sshd[6580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:08:23.821121 systemd-logind[1930]: New session 20 of user core.
Apr 21 10:08:23.827817 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:08:24.652494 sshd[6580]: pam_unix(sshd:session): session closed for user core
Apr 21 10:08:24.660606 systemd[1]: sshd@19-172.31.20.11:22-4.175.71.9:45700.service: Deactivated successfully.
Apr 21 10:08:24.667706 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:08:24.670178 systemd-logind[1930]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:08:24.676731 systemd-logind[1930]: Removed session 20.
Apr 21 10:08:27.195260 containerd[1955]: time="2026-04-21T10:08:27.195182009Z" level=info msg="StopPodSandbox for \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\""
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.300 [WARNING][6602] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"678541e7-a702-489b-a934-cdd3a561ab11", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d", Pod:"csi-node-driver-8sx57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicdd7c039a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.301 [INFO][6602] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d"
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.301 [INFO][6602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" iface="eth0" netns=""
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.301 [INFO][6602] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d"
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.301 [INFO][6602] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d"
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.377 [INFO][6609] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" HandleID="k8s-pod-network.5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0"
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.377 [INFO][6609] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.377 [INFO][6609] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.393 [WARNING][6609] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" HandleID="k8s-pod-network.5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0"
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.393 [INFO][6609] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" HandleID="k8s-pod-network.5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0"
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.396 [INFO][6609] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:08:27.404055 containerd[1955]: 2026-04-21 10:08:27.399 [INFO][6602] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d"
Apr 21 10:08:27.406000 containerd[1955]: time="2026-04-21T10:08:27.404107818Z" level=info msg="TearDown network for sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\" successfully"
Apr 21 10:08:27.406000 containerd[1955]: time="2026-04-21T10:08:27.404148266Z" level=info msg="StopPodSandbox for \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\" returns successfully"
Apr 21 10:08:27.406926 containerd[1955]: time="2026-04-21T10:08:27.406814474Z" level=info msg="RemovePodSandbox for \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\""
Apr 21 10:08:27.407666 containerd[1955]: time="2026-04-21T10:08:27.407364685Z" level=info msg="Forcibly stopping sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\""
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.486 [WARNING][6623] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"678541e7-a702-489b-a934-cdd3a561ab11", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"a3e8435b7cc3ae195e8b99654e6660a416f9c392038ac8913470a76b2597968d", Pod:"csi-node-driver-8sx57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicdd7c039a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.487 [INFO][6623] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d"
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.487 [INFO][6623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" iface="eth0" netns=""
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.487 [INFO][6623] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d"
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.487 [INFO][6623] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d"
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.558 [INFO][6630] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" HandleID="k8s-pod-network.5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0"
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.558 [INFO][6630] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.558 [INFO][6630] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.573 [WARNING][6630] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" HandleID="k8s-pod-network.5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0"
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.573 [INFO][6630] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" HandleID="k8s-pod-network.5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d" Workload="ip--172--31--20--11-k8s-csi--node--driver--8sx57-eth0"
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.578 [INFO][6630] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:08:27.586633 containerd[1955]: 2026-04-21 10:08:27.582 [INFO][6623] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d"
Apr 21 10:08:27.586633 containerd[1955]: time="2026-04-21T10:08:27.586154761Z" level=info msg="TearDown network for sandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\" successfully"
Apr 21 10:08:27.595108 containerd[1955]: time="2026-04-21T10:08:27.595046212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:08:27.595588 containerd[1955]: time="2026-04-21T10:08:27.595372367Z" level=info msg="RemovePodSandbox \"5c866df755e1af6a463e3173f093534a10997bc2c2d8feb4bcc9715f12fe346d\" returns successfully"
Apr 21 10:08:27.596565 containerd[1955]: time="2026-04-21T10:08:27.596209366Z" level=info msg="StopPodSandbox for \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\""
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.669 [WARNING][6645] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1a31afb8-9d55-404f-8363-717bc3de0918", ResourceVersion:"1386", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc", Pod:"goldmane-9f7667bb8-fr8h9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib59fb0aaf00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.670 [INFO][6645] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6"
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.670 [INFO][6645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" iface="eth0" netns=""
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.670 [INFO][6645] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6"
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.670 [INFO][6645] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6"
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.730 [INFO][6653] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" HandleID="k8s-pod-network.fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0"
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.731 [INFO][6653] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.731 [INFO][6653] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.745 [WARNING][6653] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" HandleID="k8s-pod-network.fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0"
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.745 [INFO][6653] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" HandleID="k8s-pod-network.fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0"
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.748 [INFO][6653] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:08:27.757051 containerd[1955]: 2026-04-21 10:08:27.751 [INFO][6645] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6"
Apr 21 10:08:27.758727 containerd[1955]: time="2026-04-21T10:08:27.758437008Z" level=info msg="TearDown network for sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\" successfully"
Apr 21 10:08:27.758727 containerd[1955]: time="2026-04-21T10:08:27.758543778Z" level=info msg="StopPodSandbox for \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\" returns successfully"
Apr 21 10:08:27.760053 containerd[1955]: time="2026-04-21T10:08:27.759417070Z" level=info msg="RemovePodSandbox for \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\""
Apr 21 10:08:27.760053 containerd[1955]: time="2026-04-21T10:08:27.759801934Z" level=info msg="Forcibly stopping sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\""
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.840 [WARNING][6668] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1a31afb8-9d55-404f-8363-717bc3de0918", ResourceVersion:"1386", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"fb8943c7b1587acbb7ccd5a3c585cd44b8f121f82c321d10f574f5d825ef56cc", Pod:"goldmane-9f7667bb8-fr8h9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib59fb0aaf00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.842 [INFO][6668] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6"
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.842 [INFO][6668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" iface="eth0" netns=""
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.842 [INFO][6668] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6"
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.842 [INFO][6668] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6"
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.928 [INFO][6675] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" HandleID="k8s-pod-network.fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0"
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.929 [INFO][6675] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.929 [INFO][6675] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.944 [WARNING][6675] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" HandleID="k8s-pod-network.fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0"
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.944 [INFO][6675] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" HandleID="k8s-pod-network.fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6" Workload="ip--172--31--20--11-k8s-goldmane--9f7667bb8--fr8h9-eth0"
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.948 [INFO][6675] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:08:27.958422 containerd[1955]: 2026-04-21 10:08:27.953 [INFO][6668] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6"
Apr 21 10:08:27.958422 containerd[1955]: time="2026-04-21T10:08:27.957579031Z" level=info msg="TearDown network for sandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\" successfully"
Apr 21 10:08:27.963377 containerd[1955]: time="2026-04-21T10:08:27.963250437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:08:27.963604 containerd[1955]: time="2026-04-21T10:08:27.963392420Z" level=info msg="RemovePodSandbox \"fab10588f8e7d4a2b00722325f708644c5b0a0b846bcb03afc2160de84a05ad6\" returns successfully"
Apr 21 10:08:27.965531 containerd[1955]: time="2026-04-21T10:08:27.965048047Z" level=info msg="StopPodSandbox for \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\""
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.045 [WARNING][6690] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0", GenerateName:"calico-kube-controllers-74499fb665-", Namespace:"calico-system", SelfLink:"", UID:"26d85b28-0419-4bec-8bca-4c6bc1376147", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 7, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74499fb665", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9", Pod:"calico-kube-controllers-74499fb665-57b24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97f1de98980", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.046 [INFO][6690] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e"
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.046 [INFO][6690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" iface="eth0" netns=""
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.046 [INFO][6690] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e"
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.046 [INFO][6690] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e"
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.107 [INFO][6697] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" HandleID="k8s-pod-network.c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0"
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.107 [INFO][6697] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.107 [INFO][6697] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.128 [WARNING][6697] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" HandleID="k8s-pod-network.c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0"
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.128 [INFO][6697] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" HandleID="k8s-pod-network.c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0"
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.132 [INFO][6697] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:08:28.145036 containerd[1955]: 2026-04-21 10:08:28.141 [INFO][6690] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e"
Apr 21 10:08:28.145950 containerd[1955]: time="2026-04-21T10:08:28.145682113Z" level=info msg="TearDown network for sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\" successfully"
Apr 21 10:08:28.145950 containerd[1955]: time="2026-04-21T10:08:28.145723737Z" level=info msg="StopPodSandbox for \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\" returns successfully"
Apr 21 10:08:28.146433 containerd[1955]: time="2026-04-21T10:08:28.146341698Z" level=info msg="RemovePodSandbox for \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\""
Apr 21 10:08:28.146433 containerd[1955]: time="2026-04-21T10:08:28.146407059Z" level=info msg="Forcibly stopping sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\""
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.218 [WARNING][6712] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0", GenerateName:"calico-kube-controllers-74499fb665-", Namespace:"calico-system", SelfLink:"", UID:"26d85b28-0419-4bec-8bca-4c6bc1376147", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 7, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74499fb665", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"25216af02af7751975e027390ff59ade0564686f35cd1fc89261d39f8a7407b9", Pod:"calico-kube-controllers-74499fb665-57b24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97f1de98980", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.219 [INFO][6712] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e"
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.219 [INFO][6712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" iface="eth0" netns=""
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.219 [INFO][6712] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e"
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.219 [INFO][6712] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e"
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.286 [INFO][6720] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" HandleID="k8s-pod-network.c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0"
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.286 [INFO][6720] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.286 [INFO][6720] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.301 [WARNING][6720] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" HandleID="k8s-pod-network.c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0"
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.301 [INFO][6720] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" HandleID="k8s-pod-network.c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e" Workload="ip--172--31--20--11-k8s-calico--kube--controllers--74499fb665--57b24-eth0"
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.305 [INFO][6720] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:08:28.313946 containerd[1955]: 2026-04-21 10:08:28.310 [INFO][6712] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e"
Apr 21 10:08:28.316177 containerd[1955]: time="2026-04-21T10:08:28.313994708Z" level=info msg="TearDown network for sandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\" successfully"
Apr 21 10:08:28.320653 containerd[1955]: time="2026-04-21T10:08:28.320448629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:08:28.321295 containerd[1955]: time="2026-04-21T10:08:28.320860855Z" level=info msg="RemovePodSandbox \"c426fd80f098b5a96307306ff075dfee427d004363145efae42a8a28c4de5c7e\" returns successfully"
Apr 21 10:08:28.321748 containerd[1955]: time="2026-04-21T10:08:28.321692715Z" level=info msg="StopPodSandbox for \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\""
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.414 [WARNING][6734] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0", GenerateName:"calico-apiserver-756779796c-", Namespace:"calico-system", SelfLink:"", UID:"dcac6754-a482-47df-a01f-171bd1cc8980", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756779796c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3", Pod:"calico-apiserver-756779796c-mwdfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali40de97ef5a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.415 [INFO][6734] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685"
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.415 [INFO][6734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" iface="eth0" netns=""
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.415 [INFO][6734] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685"
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.415 [INFO][6734] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685"
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.479 [INFO][6741] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" HandleID="k8s-pod-network.734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0"
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.479 [INFO][6741] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.479 [INFO][6741] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.497 [WARNING][6741] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" HandleID="k8s-pod-network.734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0"
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.498 [INFO][6741] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" HandleID="k8s-pod-network.734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0"
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.500 [INFO][6741] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:08:28.507413 containerd[1955]: 2026-04-21 10:08:28.504 [INFO][6734] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685"
Apr 21 10:08:28.508317 containerd[1955]: time="2026-04-21T10:08:28.507469394Z" level=info msg="TearDown network for sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\" successfully"
Apr 21 10:08:28.508317 containerd[1955]: time="2026-04-21T10:08:28.507532246Z" level=info msg="StopPodSandbox for \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\" returns successfully"
Apr 21 10:08:28.508317 containerd[1955]: time="2026-04-21T10:08:28.508294987Z" level=info msg="RemovePodSandbox for \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\""
Apr 21 10:08:28.508486 containerd[1955]: time="2026-04-21T10:08:28.508339853Z" level=info msg="Forcibly stopping sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\""
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.606 [WARNING][6756] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0", GenerateName:"calico-apiserver-756779796c-", Namespace:"calico-system", SelfLink:"", UID:"dcac6754-a482-47df-a01f-171bd1cc8980", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756779796c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-11", ContainerID:"9214cad41c6f799db953f6fce7c082d158e7dbcd2ed3c99d1bfcf4e9492cf6d3", Pod:"calico-apiserver-756779796c-mwdfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali40de97ef5a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.606 [INFO][6756] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685"
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.606 [INFO][6756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" iface="eth0" netns=""
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.606 [INFO][6756] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685"
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.606 [INFO][6756] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685"
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.670 [INFO][6764] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" HandleID="k8s-pod-network.734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0"
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.670 [INFO][6764] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.670 [INFO][6764] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.685 [WARNING][6764] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" HandleID="k8s-pod-network.734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0"
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.685 [INFO][6764] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" HandleID="k8s-pod-network.734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685" Workload="ip--172--31--20--11-k8s-calico--apiserver--756779796c--mwdfq-eth0"
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.689 [INFO][6764] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:08:28.702748 containerd[1955]: 2026-04-21 10:08:28.695 [INFO][6756] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685"
Apr 21 10:08:28.705963 containerd[1955]: time="2026-04-21T10:08:28.703175756Z" level=info msg="TearDown network for sandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\" successfully"
Apr 21 10:08:28.714978 containerd[1955]: time="2026-04-21T10:08:28.713791545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:08:28.714978 containerd[1955]: time="2026-04-21T10:08:28.714036552Z" level=info msg="RemovePodSandbox \"734c7f6214f7cf5d7196daffcbcfbbe97e510515850d3c08bd2886e1174b3685\" returns successfully"
Apr 21 10:08:29.854855 systemd[1]: Started sshd@20-172.31.20.11:22-4.175.71.9:60484.service - OpenSSH per-connection server daemon (4.175.71.9:60484).
Apr 21 10:08:30.916755 sshd[6772]: Accepted publickey for core from 4.175.71.9 port 60484 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:08:30.925156 sshd[6772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:08:30.934135 systemd-logind[1930]: New session 21 of user core.
Apr 21 10:08:30.943986 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:08:31.781845 sshd[6772]: pam_unix(sshd:session): session closed for user core
Apr 21 10:08:31.787776 systemd-logind[1930]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:08:31.790742 systemd[1]: sshd@20-172.31.20.11:22-4.175.71.9:60484.service: Deactivated successfully.
Apr 21 10:08:31.795941 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:08:31.798527 systemd-logind[1930]: Removed session 21.
Apr 21 10:08:36.961010 systemd[1]: Started sshd@21-172.31.20.11:22-4.175.71.9:50412.service - OpenSSH per-connection server daemon (4.175.71.9:50412).
Apr 21 10:08:37.979371 sshd[6806]: Accepted publickey for core from 4.175.71.9 port 50412 ssh2: RSA SHA256:aREzjlBzhX3GBruysBn1Uz2TtCDk2d5wBU92NUxSFu4
Apr 21 10:08:37.982575 sshd[6806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:08:37.990557 systemd-logind[1930]: New session 22 of user core.
Apr 21 10:08:37.998803 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 10:08:38.807692 sshd[6806]: pam_unix(sshd:session): session closed for user core
Apr 21 10:08:38.814074 systemd-logind[1930]: Session 22 logged out. Waiting for processes to exit.
Apr 21 10:08:38.814478 systemd[1]: sshd@21-172.31.20.11:22-4.175.71.9:50412.service: Deactivated successfully.
Apr 21 10:08:38.818207 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 10:08:38.823372 systemd-logind[1930]: Removed session 22.
Apr 21 10:08:46.385252 systemd[1]: run-containerd-runc-k8s.io-a9d40cb889643b30cf5177384e01e98e38da684477c5254dcb43f99e995cd1aa-runc.Pb0iFy.mount: Deactivated successfully.
Apr 21 10:08:52.662461 systemd[1]: cri-containerd-0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc.scope: Deactivated successfully.
Apr 21 10:08:52.665741 systemd[1]: cri-containerd-0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc.scope: Consumed 4.756s CPU time, 17.4M memory peak, 0B memory swap peak.
Apr 21 10:08:52.722044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc-rootfs.mount: Deactivated successfully.
Apr 21 10:08:52.724551 containerd[1955]: time="2026-04-21T10:08:52.722862897Z" level=info msg="shim disconnected" id=0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc namespace=k8s.io
Apr 21 10:08:52.727090 containerd[1955]: time="2026-04-21T10:08:52.726707590Z" level=warning msg="cleaning up after shim disconnected" id=0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc namespace=k8s.io
Apr 21 10:08:52.727090 containerd[1955]: time="2026-04-21T10:08:52.726808513Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:08:53.064433 systemd[1]: cri-containerd-d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5.scope: Deactivated successfully.
Apr 21 10:08:53.065075 systemd[1]: cri-containerd-d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5.scope: Consumed 33.887s CPU time.
Apr 21 10:08:53.107050 containerd[1955]: time="2026-04-21T10:08:53.106809269Z" level=info msg="shim disconnected" id=d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5 namespace=k8s.io
Apr 21 10:08:53.107050 containerd[1955]: time="2026-04-21T10:08:53.106887956Z" level=warning msg="cleaning up after shim disconnected" id=d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5 namespace=k8s.io
Apr 21 10:08:53.107050 containerd[1955]: time="2026-04-21T10:08:53.106907922Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:08:53.112100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5-rootfs.mount: Deactivated successfully.
Apr 21 10:08:53.546933 kubelet[3162]: I0421 10:08:53.546096 3162 scope.go:122] "RemoveContainer" containerID="0bd22e82160f2c8bf19791a772b6dcc521c75d5ebeaf23f07a12ce9d26a825dc"
Apr 21 10:08:53.551050 kubelet[3162]: I0421 10:08:53.551002 3162 scope.go:122] "RemoveContainer" containerID="d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5"
Apr 21 10:08:53.554501 containerd[1955]: time="2026-04-21T10:08:53.554315681Z" level=info msg="CreateContainer within sandbox \"ba70ae438ab14005cc968aef7fbe9f2345663a51d888de4657b4e658697b5561\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 21 10:08:53.558279 containerd[1955]: time="2026-04-21T10:08:53.557896951Z" level=info msg="CreateContainer within sandbox \"90dd0df3282434f4248494f1309ad6660e989dd97206795dff9c7dd0686bf9a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 21 10:08:53.581950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002220551.mount: Deactivated successfully.
Apr 21 10:08:53.585923 containerd[1955]: time="2026-04-21T10:08:53.584876210Z" level=info msg="CreateContainer within sandbox \"ba70ae438ab14005cc968aef7fbe9f2345663a51d888de4657b4e658697b5561\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4\""
Apr 21 10:08:53.591537 containerd[1955]: time="2026-04-21T10:08:53.588273632Z" level=info msg="StartContainer for \"68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4\""
Apr 21 10:08:53.595139 containerd[1955]: time="2026-04-21T10:08:53.594323635Z" level=info msg="CreateContainer within sandbox \"90dd0df3282434f4248494f1309ad6660e989dd97206795dff9c7dd0686bf9a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3d94b850eb6985a3eed58dea710e143f7cc3a7ad1089db73d1752ac149cacf8c\""
Apr 21 10:08:53.600534 containerd[1955]: time="2026-04-21T10:08:53.599032653Z" level=info msg="StartContainer for \"3d94b850eb6985a3eed58dea710e143f7cc3a7ad1089db73d1752ac149cacf8c\""
Apr 21 10:08:53.648052 systemd[1]: Started cri-containerd-68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4.scope - libcontainer container 68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4.
Apr 21 10:08:53.670019 systemd[1]: Started cri-containerd-3d94b850eb6985a3eed58dea710e143f7cc3a7ad1089db73d1752ac149cacf8c.scope - libcontainer container 3d94b850eb6985a3eed58dea710e143f7cc3a7ad1089db73d1752ac149cacf8c.
Apr 21 10:08:53.756007 containerd[1955]: time="2026-04-21T10:08:53.755886662Z" level=info msg="StartContainer for \"68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4\" returns successfully"
Apr 21 10:08:53.786249 containerd[1955]: time="2026-04-21T10:08:53.786076433Z" level=info msg="StartContainer for \"3d94b850eb6985a3eed58dea710e143f7cc3a7ad1089db73d1752ac149cacf8c\" returns successfully"
Apr 21 10:08:57.062463 kubelet[3162]: E0421 10:08:57.061649 3162 controller.go:251] "Failed to update lease" err="Put \"https://172.31.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 21 10:08:58.357961 systemd[1]: cri-containerd-75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7.scope: Deactivated successfully.
Apr 21 10:08:58.359012 systemd[1]: cri-containerd-75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7.scope: Consumed 2.704s CPU time, 16.4M memory peak, 0B memory swap peak.
Apr 21 10:08:58.398143 containerd[1955]: time="2026-04-21T10:08:58.398015623Z" level=info msg="shim disconnected" id=75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7 namespace=k8s.io
Apr 21 10:08:58.398143 containerd[1955]: time="2026-04-21T10:08:58.398128467Z" level=warning msg="cleaning up after shim disconnected" id=75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7 namespace=k8s.io
Apr 21 10:08:58.399810 containerd[1955]: time="2026-04-21T10:08:58.398153559Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:08:58.403412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7-rootfs.mount: Deactivated successfully.
Apr 21 10:08:58.580556 kubelet[3162]: I0421 10:08:58.580255 3162 scope.go:122] "RemoveContainer" containerID="75942a978b5676d6758cdc99ff7aa0248e5378c2a3b90b974f5d4520976282f7"
Apr 21 10:08:58.585867 containerd[1955]: time="2026-04-21T10:08:58.585793402Z" level=info msg="CreateContainer within sandbox \"dcbd461d69c9327767543660c172abb76953b2c505fb8e537de4e9ae0b089841\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 21 10:08:58.616675 containerd[1955]: time="2026-04-21T10:08:58.614446081Z" level=info msg="CreateContainer within sandbox \"dcbd461d69c9327767543660c172abb76953b2c505fb8e537de4e9ae0b089841\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"267f4fbd6bc10610d17947e5fc133354def31ae163999d07110992d4dd2256a5\""
Apr 21 10:08:58.616675 containerd[1955]: time="2026-04-21T10:08:58.615460084Z" level=info msg="StartContainer for \"267f4fbd6bc10610d17947e5fc133354def31ae163999d07110992d4dd2256a5\""
Apr 21 10:08:58.693848 systemd[1]: Started cri-containerd-267f4fbd6bc10610d17947e5fc133354def31ae163999d07110992d4dd2256a5.scope - libcontainer container 267f4fbd6bc10610d17947e5fc133354def31ae163999d07110992d4dd2256a5.
Apr 21 10:08:58.765092 containerd[1955]: time="2026-04-21T10:08:58.764967982Z" level=info msg="StartContainer for \"267f4fbd6bc10610d17947e5fc133354def31ae163999d07110992d4dd2256a5\" returns successfully"
Apr 21 10:09:05.295981 systemd[1]: cri-containerd-68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4.scope: Deactivated successfully.
Apr 21 10:09:05.332619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4-rootfs.mount: Deactivated successfully.
Apr 21 10:09:05.346830 containerd[1955]: time="2026-04-21T10:09:05.346745616Z" level=info msg="shim disconnected" id=68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4 namespace=k8s.io
Apr 21 10:09:05.346830 containerd[1955]: time="2026-04-21T10:09:05.346826944Z" level=warning msg="cleaning up after shim disconnected" id=68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4 namespace=k8s.io
Apr 21 10:09:05.347594 containerd[1955]: time="2026-04-21T10:09:05.346849419Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:09:05.609824 kubelet[3162]: I0421 10:09:05.609657 3162 scope.go:122] "RemoveContainer" containerID="d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5"
Apr 21 10:09:05.611014 kubelet[3162]: I0421 10:09:05.610059 3162 scope.go:122] "RemoveContainer" containerID="68c67e1e312295a14951e02650be447d6bdb92b46db49127ddd809743f6dddd4"
Apr 21 10:09:05.611014 kubelet[3162]: E0421 10:09:05.610321 3162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-6vz9l_tigera-operator(4e1de161-af2b-4f77-a406-898e76c6edbf)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-6vz9l" podUID="4e1de161-af2b-4f77-a406-898e76c6edbf"
Apr 21 10:09:05.613900 containerd[1955]: time="2026-04-21T10:09:05.613826195Z" level=info msg="RemoveContainer for \"d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5\""
Apr 21 10:09:05.621343 containerd[1955]: time="2026-04-21T10:09:05.621268835Z" level=info msg="RemoveContainer for \"d0a692be492791dd6bbcd6ee236d22e2681dd83c8d77ed339f033965e62ca6e5\" returns successfully"