Apr 17 23:34:30.275464 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 17 23:34:30.275511 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 17 22:13:49 -00 2026
Apr 17 23:34:30.275537 kernel: KASLR disabled due to lack of seed
Apr 17 23:34:30.275554 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:34:30.275571 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 17 23:34:30.275586 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:34:30.275605 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 17 23:34:30.275621 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 17 23:34:30.275637 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 17 23:34:30.275653 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 17 23:34:30.275674 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 17 23:34:30.275690 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 17 23:34:30.275706 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 17 23:34:30.275722 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 17 23:34:30.275741 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 17 23:34:30.275762 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 17 23:34:30.275780 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 17 23:34:30.275798 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 17 23:34:30.275817 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 17 23:34:30.275837 kernel: printk: bootconsole [uart0] enabled
Apr 17 23:34:30.275857 kernel: NUMA: Failed to initialise from firmware
Apr 17 23:34:30.275874 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 17 23:34:30.275894 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 17 23:34:30.275914 kernel: Zone ranges:
Apr 17 23:34:30.287998 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 17 23:34:30.288049 kernel: DMA32 empty
Apr 17 23:34:30.288089 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 17 23:34:30.288108 kernel: Movable zone start for each node
Apr 17 23:34:30.288126 kernel: Early memory node ranges
Apr 17 23:34:30.288143 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 17 23:34:30.288161 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 17 23:34:30.288179 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 17 23:34:30.288196 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 17 23:34:30.288213 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 17 23:34:30.288230 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 17 23:34:30.288247 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 17 23:34:30.288264 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 17 23:34:30.288280 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 17 23:34:30.288301 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 17 23:34:30.288319 kernel: psci: probing for conduit method from ACPI.
Apr 17 23:34:30.288343 kernel: psci: PSCIv1.0 detected in firmware.
Apr 17 23:34:30.288361 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 17 23:34:30.288379 kernel: psci: Trusted OS migration not required
Apr 17 23:34:30.288401 kernel: psci: SMC Calling Convention v1.1
Apr 17 23:34:30.288419 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 17 23:34:30.288437 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 17 23:34:30.288454 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 17 23:34:30.288472 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 17 23:34:30.288490 kernel: Detected PIPT I-cache on CPU0
Apr 17 23:34:30.288508 kernel: CPU features: detected: GIC system register CPU interface
Apr 17 23:34:30.288525 kernel: CPU features: detected: Spectre-v2
Apr 17 23:34:30.288543 kernel: CPU features: detected: Spectre-v3a
Apr 17 23:34:30.288561 kernel: CPU features: detected: Spectre-BHB
Apr 17 23:34:30.288579 kernel: CPU features: detected: ARM erratum 1742098
Apr 17 23:34:30.288602 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 17 23:34:30.288621 kernel: alternatives: applying boot alternatives
Apr 17 23:34:30.288642 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:34:30.288661 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:34:30.288680 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:34:30.288697 kernel: Fallback order for Node 0: 0
Apr 17 23:34:30.288715 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 17 23:34:30.288741 kernel: Policy zone: Normal
Apr 17 23:34:30.288761 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:34:30.288778 kernel: software IO TLB: area num 2.
Apr 17 23:34:30.288796 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 17 23:34:30.288819 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 17 23:34:30.288837 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:34:30.288855 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:34:30.288873 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:34:30.288891 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:34:30.288909 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:34:30.288965 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:34:30.288986 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:34:30.289004 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:34:30.289021 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 17 23:34:30.289039 kernel: GICv3: 96 SPIs implemented
Apr 17 23:34:30.289063 kernel: GICv3: 0 Extended SPIs implemented
Apr 17 23:34:30.289081 kernel: Root IRQ handler: gic_handle_irq
Apr 17 23:34:30.289099 kernel: GICv3: GICv3 features: 16 PPIs
Apr 17 23:34:30.289117 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 17 23:34:30.289134 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 17 23:34:30.289152 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 17 23:34:30.289170 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 17 23:34:30.289188 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 17 23:34:30.289206 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 17 23:34:30.289224 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 17 23:34:30.289241 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:34:30.289259 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 17 23:34:30.289281 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 17 23:34:30.289299 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 17 23:34:30.289317 kernel: Console: colour dummy device 80x25
Apr 17 23:34:30.289335 kernel: printk: console [tty1] enabled
Apr 17 23:34:30.289353 kernel: ACPI: Core revision 20230628
Apr 17 23:34:30.289372 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 17 23:34:30.289390 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:34:30.289408 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:34:30.289426 kernel: landlock: Up and running.
Apr 17 23:34:30.289447 kernel: SELinux: Initializing.
Apr 17 23:34:30.289466 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:34:30.289484 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:34:30.289502 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:34:30.289520 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:34:30.289538 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:34:30.289556 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:34:30.289595 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 17 23:34:30.289615 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 17 23:34:30.289638 kernel: Remapping and enabling EFI services.
Apr 17 23:34:30.289657 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:34:30.289675 kernel: Detected PIPT I-cache on CPU1
Apr 17 23:34:30.289693 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 17 23:34:30.289711 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 17 23:34:30.289729 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 17 23:34:30.289747 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:34:30.289765 kernel: SMP: Total of 2 processors activated.
Apr 17 23:34:30.289782 kernel: CPU features: detected: 32-bit EL0 Support
Apr 17 23:34:30.289804 kernel: CPU features: detected: 32-bit EL1 Support
Apr 17 23:34:30.289823 kernel: CPU features: detected: CRC32 instructions
Apr 17 23:34:30.289841 kernel: CPU: All CPU(s) started at EL1
Apr 17 23:34:30.289870 kernel: alternatives: applying system-wide alternatives
Apr 17 23:34:30.289892 kernel: devtmpfs: initialized
Apr 17 23:34:30.289911 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:34:30.291119 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:34:30.291147 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:34:30.291168 kernel: SMBIOS 3.0.0 present.
Apr 17 23:34:30.291197 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 17 23:34:30.291217 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:34:30.291237 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 17 23:34:30.291258 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 17 23:34:30.291277 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 17 23:34:30.291297 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:34:30.291316 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1
Apr 17 23:34:30.291335 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:34:30.291358 kernel: cpuidle: using governor menu
Apr 17 23:34:30.291377 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 17 23:34:30.291396 kernel: ASID allocator initialised with 65536 entries
Apr 17 23:34:30.291415 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:34:30.291433 kernel: Serial: AMBA PL011 UART driver
Apr 17 23:34:30.291452 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 17 23:34:30.291471 kernel: Modules: 509008 pages in range for PLT usage
Apr 17 23:34:30.291490 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:34:30.291509 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:34:30.291532 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 17 23:34:30.291552 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 17 23:34:30.291572 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:34:30.291591 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:34:30.291610 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 17 23:34:30.291628 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 17 23:34:30.291647 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:34:30.291666 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:34:30.291685 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:34:30.291708 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:34:30.291727 kernel: ACPI: Interpreter enabled
Apr 17 23:34:30.291745 kernel: ACPI: Using GIC for interrupt routing
Apr 17 23:34:30.291764 kernel: ACPI: MCFG table detected, 1 entries
Apr 17 23:34:30.291783 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 17 23:34:30.293167 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:34:30.293399 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 17 23:34:30.293640 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 17 23:34:30.293860 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 17 23:34:30.294361 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 17 23:34:30.294391 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 17 23:34:30.294411 kernel: acpiphp: Slot [1] registered
Apr 17 23:34:30.294430 kernel: acpiphp: Slot [2] registered
Apr 17 23:34:30.294449 kernel: acpiphp: Slot [3] registered
Apr 17 23:34:30.294468 kernel: acpiphp: Slot [4] registered
Apr 17 23:34:30.294486 kernel: acpiphp: Slot [5] registered
Apr 17 23:34:30.294512 kernel: acpiphp: Slot [6] registered
Apr 17 23:34:30.294531 kernel: acpiphp: Slot [7] registered
Apr 17 23:34:30.294550 kernel: acpiphp: Slot [8] registered
Apr 17 23:34:30.294569 kernel: acpiphp: Slot [9] registered
Apr 17 23:34:30.294587 kernel: acpiphp: Slot [10] registered
Apr 17 23:34:30.294606 kernel: acpiphp: Slot [11] registered
Apr 17 23:34:30.294624 kernel: acpiphp: Slot [12] registered
Apr 17 23:34:30.294643 kernel: acpiphp: Slot [13] registered
Apr 17 23:34:30.294662 kernel: acpiphp: Slot [14] registered
Apr 17 23:34:30.294681 kernel: acpiphp: Slot [15] registered
Apr 17 23:34:30.294705 kernel: acpiphp: Slot [16] registered
Apr 17 23:34:30.294723 kernel: acpiphp: Slot [17] registered
Apr 17 23:34:30.294742 kernel: acpiphp: Slot [18] registered
Apr 17 23:34:30.294761 kernel: acpiphp: Slot [19] registered
Apr 17 23:34:30.294779 kernel: acpiphp: Slot [20] registered
Apr 17 23:34:30.294798 kernel: acpiphp: Slot [21] registered
Apr 17 23:34:30.294816 kernel: acpiphp: Slot [22] registered
Apr 17 23:34:30.294835 kernel: acpiphp: Slot [23] registered
Apr 17 23:34:30.294853 kernel: acpiphp: Slot [24] registered
Apr 17 23:34:30.294876 kernel: acpiphp: Slot [25] registered
Apr 17 23:34:30.294895 kernel: acpiphp: Slot [26] registered
Apr 17 23:34:30.294914 kernel: acpiphp: Slot [27] registered
Apr 17 23:34:30.296050 kernel: acpiphp: Slot [28] registered
Apr 17 23:34:30.296075 kernel: acpiphp: Slot [29] registered
Apr 17 23:34:30.296094 kernel: acpiphp: Slot [30] registered
Apr 17 23:34:30.296113 kernel: acpiphp: Slot [31] registered
Apr 17 23:34:30.296132 kernel: PCI host bridge to bus 0000:00
Apr 17 23:34:30.296406 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 17 23:34:30.296607 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 17 23:34:30.296794 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 17 23:34:30.297302 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 17 23:34:30.297578 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 17 23:34:30.297820 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 17 23:34:30.298154 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 17 23:34:30.298427 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 17 23:34:30.298677 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 17 23:34:30.298912 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 17 23:34:30.299269 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 17 23:34:30.299487 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 17 23:34:30.299701 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 17 23:34:30.299918 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 17 23:34:30.300272 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 17 23:34:30.300543 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 17 23:34:30.300803 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 17 23:34:30.301079 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 17 23:34:30.301117 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 17 23:34:30.301138 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 17 23:34:30.301170 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 17 23:34:30.301191 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 17 23:34:30.301231 kernel: iommu: Default domain type: Translated
Apr 17 23:34:30.301262 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 17 23:34:30.301284 kernel: efivars: Registered efivars operations
Apr 17 23:34:30.301315 kernel: vgaarb: loaded
Apr 17 23:34:30.301335 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 17 23:34:30.301367 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:34:30.301387 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:34:30.301420 kernel: pnp: PnP ACPI init
Apr 17 23:34:30.301739 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 17 23:34:30.301785 kernel: pnp: PnP ACPI: found 1 devices
Apr 17 23:34:30.301818 kernel: NET: Registered PF_INET protocol family
Apr 17 23:34:30.301838 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:34:30.301871 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:34:30.301900 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:34:30.301938 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:34:30.301962 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:34:30.301993 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:34:30.302029 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:34:30.302054 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:34:30.302083 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:34:30.302106 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:34:30.302136 kernel: kvm [1]: HYP mode not available
Apr 17 23:34:30.302223 kernel: Initialise system trusted keyrings
Apr 17 23:34:30.302249 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:34:30.302280 kernel: Key type asymmetric registered
Apr 17 23:34:30.302302 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:34:30.305784 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 17 23:34:30.305828 kernel: io scheduler mq-deadline registered
Apr 17 23:34:30.305851 kernel: io scheduler kyber registered
Apr 17 23:34:30.305870 kernel: io scheduler bfq registered
Apr 17 23:34:30.306253 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 17 23:34:30.306290 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 17 23:34:30.306309 kernel: ACPI: button: Power Button [PWRB]
Apr 17 23:34:30.306329 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 17 23:34:30.306365 kernel: ACPI: button: Sleep Button [SLPB]
Apr 17 23:34:30.306405 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:34:30.306430 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 17 23:34:30.306728 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 17 23:34:30.306758 kernel: printk: console [ttyS0] disabled
Apr 17 23:34:30.306791 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 17 23:34:30.306821 kernel: printk: console [ttyS0] enabled
Apr 17 23:34:30.306845 kernel: printk: bootconsole [uart0] disabled
Apr 17 23:34:30.306875 kernel: thunder_xcv, ver 1.0
Apr 17 23:34:30.306897 kernel: thunder_bgx, ver 1.0
Apr 17 23:34:30.306955 kernel: nicpf, ver 1.0
Apr 17 23:34:30.306997 kernel: nicvf, ver 1.0
Apr 17 23:34:30.309189 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 17 23:34:30.309613 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-17T23:34:29 UTC (1776468869)
Apr 17 23:34:30.309644 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 17 23:34:30.309680 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 17 23:34:30.309711 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 17 23:34:30.309747 kernel: watchdog: Hard watchdog permanently disabled
Apr 17 23:34:30.309798 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:34:30.309829 kernel: Segment Routing with IPv6
Apr 17 23:34:30.309865 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:34:30.309901 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:34:30.309957 kernel: Key type dns_resolver registered
Apr 17 23:34:30.309979 kernel: registered taskstats version 1
Apr 17 23:34:30.309998 kernel: Loading compiled-in X.509 certificates
Apr 17 23:34:30.310017 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 1161289bfc8d953baa9f687fefeecf0e077bc535'
Apr 17 23:34:30.310036 kernel: Key type .fscrypt registered
Apr 17 23:34:30.310061 kernel: Key type fscrypt-provisioning registered
Apr 17 23:34:30.310080 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:34:30.310099 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:34:30.310118 kernel: ima: No architecture policies found
Apr 17 23:34:30.310136 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 17 23:34:30.310155 kernel: clk: Disabling unused clocks
Apr 17 23:34:30.310174 kernel: Freeing unused kernel memory: 39424K
Apr 17 23:34:30.310193 kernel: Run /init as init process
Apr 17 23:34:30.310211 kernel: with arguments:
Apr 17 23:34:30.310234 kernel: /init
Apr 17 23:34:30.310252 kernel: with environment:
Apr 17 23:34:30.310270 kernel: HOME=/
Apr 17 23:34:30.310289 kernel: TERM=linux
Apr 17 23:34:30.310312 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:34:30.310336 systemd[1]: Detected virtualization amazon.
Apr 17 23:34:30.310357 systemd[1]: Detected architecture arm64.
Apr 17 23:34:30.310377 systemd[1]: Running in initrd.
Apr 17 23:34:30.310401 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:34:30.310421 systemd[1]: Hostname set to .
Apr 17 23:34:30.310442 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:34:30.310462 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:34:30.310483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:34:30.310504 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:34:30.310525 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:34:30.310547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:34:30.310573 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:34:30.310594 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:34:30.310617 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:34:30.310638 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:34:30.310659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:34:30.310679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:34:30.310704 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:34:30.310725 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:34:30.310745 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:34:30.310766 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:34:30.310786 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:34:30.310807 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:34:30.310827 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:34:30.310848 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:34:30.310869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:34:30.310894 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:34:30.310915 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:34:30.312059 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:34:30.312088 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:34:30.312110 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:34:30.312131 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:34:30.312152 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:34:30.312173 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:34:30.312193 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:34:30.312223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:34:30.312243 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:34:30.312264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:34:30.312284 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:34:30.312348 systemd-journald[251]: Collecting audit messages is disabled.
Apr 17 23:34:30.312398 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:34:30.312421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:34:30.312442 systemd-journald[251]: Journal started
Apr 17 23:34:30.312484 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2a2a3ae444829931c10c12fd9cb8ad) is 8.0M, max 75.3M, 67.3M free.
Apr 17 23:34:30.274060 systemd-modules-load[253]: Inserted module 'overlay'
Apr 17 23:34:30.330466 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:34:30.330560 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:34:30.330588 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:34:30.336387 systemd-modules-load[253]: Inserted module 'br_netfilter'
Apr 17 23:34:30.339093 kernel: Bridge firewalling registered
Apr 17 23:34:30.340773 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:34:30.347134 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:34:30.357849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:34:30.372246 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:34:30.385240 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:34:30.415238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:34:30.428167 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:34:30.435773 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:34:30.449316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:34:30.453450 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:34:30.467671 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:34:30.508552 dracut-cmdline[286]: dracut-dracut-053
Apr 17 23:34:30.516951 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:34:30.557665 systemd-resolved[290]: Positive Trust Anchors:
Apr 17 23:34:30.558645 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:34:30.559321 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:34:30.686443 kernel: SCSI subsystem initialized
Apr 17 23:34:30.694050 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:34:30.707047 kernel: iscsi: registered transport (tcp)
Apr 17 23:34:30.729366 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:34:30.729438 kernel: QLogic iSCSI HBA Driver
Apr 17 23:34:30.801969 kernel: random: crng init done
Apr 17 23:34:30.802362 systemd-resolved[290]: Defaulting to hostname 'linux'.
Apr 17 23:34:30.807136 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:34:30.810378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:34:30.840394 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:34:30.852335 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:34:30.888211 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:34:30.888287 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:34:30.888314 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:34:30.958970 kernel: raid6: neonx8 gen() 6665 MB/s
Apr 17 23:34:30.975959 kernel: raid6: neonx4 gen() 6534 MB/s
Apr 17 23:34:30.992959 kernel: raid6: neonx2 gen() 5433 MB/s
Apr 17 23:34:31.010957 kernel: raid6: neonx1 gen() 3947 MB/s
Apr 17 23:34:31.028958 kernel: raid6: int64x8 gen() 3811 MB/s
Apr 17 23:34:31.045959 kernel: raid6: int64x4 gen() 3703 MB/s
Apr 17 23:34:31.062961 kernel: raid6: int64x2 gen() 3584 MB/s
Apr 17 23:34:31.081027 kernel: raid6: int64x1 gen() 2743 MB/s
Apr 17 23:34:31.081078 kernel: raid6: using algorithm neonx8 gen() 6665 MB/s
Apr 17 23:34:31.100441 kernel: raid6: .... xor() 4798 MB/s, rmw enabled
Apr 17 23:34:31.100490 kernel: raid6: using neon recovery algorithm
Apr 17 23:34:31.109547 kernel: xor: measuring software checksum speed
Apr 17 23:34:31.109620 kernel: 8regs : 10970 MB/sec
Apr 17 23:34:31.110828 kernel: 32regs : 11376 MB/sec
Apr 17 23:34:31.113180 kernel: arm64_neon : 9010 MB/sec
Apr 17 23:34:31.113216 kernel: xor: using function: 32regs (11376 MB/sec)
Apr 17 23:34:31.197977 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:34:31.217998 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:34:31.233301 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:34:31.267727 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Apr 17 23:34:31.276060 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:34:31.295182 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:34:31.327823 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Apr 17 23:34:31.385221 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:34:31.400315 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:34:31.519621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:34:31.531451 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:34:31.583003 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:34:31.589672 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:34:31.602853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:34:31.608471 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:34:31.632278 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:34:31.667568 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:34:31.729384 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:34:31.729947 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:34:31.741687 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:34:31.745339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:34:31.748739 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:34:31.758656 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 23:34:31.769616 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 17 23:34:31.769685 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Apr 17 23:34:31.772738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:34:31.783794 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 17 23:34:31.785775 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 17 23:34:31.792370 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:5f:86:72:99:37 Apr 17 23:34:31.797811 (udev-worker)[545]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:34:31.820181 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:34:31.832799 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 17 23:34:31.832870 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 17 23:34:31.846964 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 17 23:34:31.847160 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:34:31.867130 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:34:31.867221 kernel: GPT:9289727 != 33554431 Apr 17 23:34:31.867250 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:34:31.867276 kernel: GPT:9289727 != 33554431 Apr 17 23:34:31.867301 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:34:31.869238 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:34:31.883982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 17 23:34:31.955003 kernel: BTRFS: device fsid 6218981f-ef91-4196-be05-d5f6a224b350 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (545) Apr 17 23:34:31.989965 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (522) Apr 17 23:34:32.051567 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 17 23:34:32.085718 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 17 23:34:32.126696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 17 23:34:32.130185 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 17 23:34:32.150483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 23:34:32.166318 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:34:32.190208 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:34:32.190279 disk-uuid[664]: Primary Header is updated. Apr 17 23:34:32.190279 disk-uuid[664]: Secondary Entries is updated. Apr 17 23:34:32.190279 disk-uuid[664]: Secondary Header is updated. Apr 17 23:34:32.231013 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:34:33.226951 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:34:33.230249 disk-uuid[666]: The operation has completed successfully. Apr 17 23:34:33.420377 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:34:33.422264 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:34:33.481212 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 17 23:34:33.493475 sh[1007]: Success Apr 17 23:34:33.521978 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 17 23:34:33.635549 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:34:33.641688 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:34:33.650161 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:34:33.684469 kernel: BTRFS info (device dm-0): first mount of filesystem 6218981f-ef91-4196-be05-d5f6a224b350 Apr 17 23:34:33.684545 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 17 23:34:33.686586 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:34:33.686625 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:34:33.688035 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:34:33.728970 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:34:33.732273 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:34:33.742499 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:34:33.752219 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:34:33.766341 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:34:33.799754 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea Apr 17 23:34:33.799821 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 17 23:34:33.799849 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:34:33.820974 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:34:33.841890 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Apr 17 23:34:33.850186 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea Apr 17 23:34:33.859641 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:34:33.876216 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:34:33.991436 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:34:34.014298 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:34:34.067666 systemd-networkd[1199]: lo: Link UP Apr 17 23:34:34.067687 systemd-networkd[1199]: lo: Gained carrier Apr 17 23:34:34.070197 systemd-networkd[1199]: Enumeration completed Apr 17 23:34:34.071472 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:34:34.071889 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:34:34.071896 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:34:34.078525 systemd[1]: Reached target network.target - Network. Apr 17 23:34:34.100573 systemd-networkd[1199]: eth0: Link UP Apr 17 23:34:34.100587 systemd-networkd[1199]: eth0: Gained carrier Apr 17 23:34:34.100607 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:34:34.120049 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.31.247/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 23:34:34.241987 ignition[1116]: Ignition 2.19.0 Apr 17 23:34:34.242016 ignition[1116]: Stage: fetch-offline Apr 17 23:34:34.247023 ignition[1116]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:34:34.247063 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:34:34.250026 ignition[1116]: Ignition finished successfully Apr 17 23:34:34.257492 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:34:34.270537 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:34:34.299799 ignition[1210]: Ignition 2.19.0 Apr 17 23:34:34.299822 ignition[1210]: Stage: fetch Apr 17 23:34:34.300524 ignition[1210]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:34:34.300550 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:34:34.300706 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:34:34.321821 ignition[1210]: PUT result: OK Apr 17 23:34:34.325142 ignition[1210]: parsed url from cmdline: "" Apr 17 23:34:34.325214 ignition[1210]: no config URL provided Apr 17 23:34:34.325231 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:34:34.325259 ignition[1210]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:34:34.325293 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:34:34.335538 ignition[1210]: PUT result: OK Apr 17 23:34:34.335647 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 17 23:34:34.340644 ignition[1210]: GET result: OK Apr 17 23:34:34.340815 ignition[1210]: parsing config with SHA512: 6f54f7334fbed79eb6ff3ff82e8b9f4bb495be87ac996c15f96bbb845c77d9d53e5b7b27fbfb240f5a31dc7d7b0ed317f534d37d0e93a22b7f297ea9ad985e47 Apr 17 23:34:34.356147 unknown[1210]: fetched base config from "system" Apr 17 
23:34:34.357045 ignition[1210]: fetch: fetch complete Apr 17 23:34:34.356170 unknown[1210]: fetched base config from "system" Apr 17 23:34:34.357058 ignition[1210]: fetch: fetch passed Apr 17 23:34:34.356185 unknown[1210]: fetched user config from "aws" Apr 17 23:34:34.357156 ignition[1210]: Ignition finished successfully Apr 17 23:34:34.367524 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:34:34.384530 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:34:34.414083 ignition[1217]: Ignition 2.19.0 Apr 17 23:34:34.414691 ignition[1217]: Stage: kargs Apr 17 23:34:34.415493 ignition[1217]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:34:34.415543 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:34:34.415706 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:34:34.426148 ignition[1217]: PUT result: OK Apr 17 23:34:34.437125 ignition[1217]: kargs: kargs passed Apr 17 23:34:34.439424 ignition[1217]: Ignition finished successfully Apr 17 23:34:34.446025 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:34:34.460256 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:34:34.487394 ignition[1223]: Ignition 2.19.0 Apr 17 23:34:34.488003 ignition[1223]: Stage: disks Apr 17 23:34:34.488633 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:34:34.488658 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:34:34.488833 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:34:34.500077 ignition[1223]: PUT result: OK Apr 17 23:34:34.505429 ignition[1223]: disks: disks passed Apr 17 23:34:34.505607 ignition[1223]: Ignition finished successfully Apr 17 23:34:34.508884 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:34:34.513171 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Apr 17 23:34:34.516300 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:34:34.519604 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:34:34.522390 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:34:34.524846 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:34:34.543281 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:34:34.605239 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:34:34.612457 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:34:34.625524 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:34:34.724977 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2a4b2d55-130a-4cda-bef1-b1e6ed7bcf6b r/w with ordered data mode. Quota mode: none. Apr 17 23:34:34.726542 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:34:34.731359 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:34:34.750059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:34:34.758154 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:34:34.764894 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:34:34.765000 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:34:34.765052 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Apr 17 23:34:34.790948 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1251) Apr 17 23:34:34.798103 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea Apr 17 23:34:34.798164 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 17 23:34:34.797982 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:34:34.803827 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:34:34.810290 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:34:34.822963 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:34:34.825819 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:34:34.922448 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:34:34.932618 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:34:34.942685 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:34:34.955984 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:34:35.150959 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:34:35.161089 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:34:35.168316 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:34:35.196078 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:34:35.201198 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea Apr 17 23:34:35.230017 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 17 23:34:35.245967 ignition[1364]: INFO : Ignition 2.19.0 Apr 17 23:34:35.245967 ignition[1364]: INFO : Stage: mount Apr 17 23:34:35.245967 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:34:35.245967 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:34:35.256153 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:34:35.260167 ignition[1364]: INFO : PUT result: OK Apr 17 23:34:35.265157 ignition[1364]: INFO : mount: mount passed Apr 17 23:34:35.265157 ignition[1364]: INFO : Ignition finished successfully Apr 17 23:34:35.270034 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:34:35.283160 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:34:35.742242 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:34:35.768053 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1375) Apr 17 23:34:35.768122 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea Apr 17 23:34:35.770137 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 17 23:34:35.770179 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:34:35.776973 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:34:35.780658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:34:35.819077 ignition[1392]: INFO : Ignition 2.19.0 Apr 17 23:34:35.819077 ignition[1392]: INFO : Stage: files Apr 17 23:34:35.824171 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:34:35.824171 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:34:35.824171 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:34:35.837867 ignition[1392]: INFO : PUT result: OK Apr 17 23:34:35.837867 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:34:35.837867 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:34:35.837867 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:34:35.850984 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:34:35.850984 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:34:35.850984 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:34:35.850984 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:34:35.850984 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:34:35.850984 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 17 23:34:35.850984 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 17 23:34:35.843244 unknown[1392]: wrote ssh authorized keys file for user: core Apr 17 23:34:35.956023 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 17 
23:34:36.004091 systemd-networkd[1199]: eth0: Gained IPv6LL Apr 17 23:34:36.104316 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 17 
23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 17 23:34:36.109889 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Apr 17 23:34:36.577249 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 17 23:34:36.999435 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 17 23:34:36.999435 ignition[1392]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: 
op(e): [finished] processing unit "prepare-helm.service" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:34:37.008257 ignition[1392]: INFO : files: files passed Apr 17 23:34:37.008257 ignition[1392]: INFO : Ignition finished successfully Apr 17 23:34:37.064986 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:34:37.077199 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:34:37.084186 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 17 23:34:37.099620 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:34:37.100471 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 17 23:34:37.124529 initrd-setup-root-after-ignition[1420]: grep: Apr 17 23:34:37.124529 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:34:37.131025 initrd-setup-root-after-ignition[1420]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:34:37.131025 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:34:37.140998 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:34:37.145436 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Apr 17 23:34:37.160459 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:34:37.220143 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:34:37.221036 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:34:37.229736 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 17 23:34:37.233009 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 17 23:34:37.235826 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:34:37.237725 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:34:37.283997 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:34:37.301883 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:34:37.330200 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:34:37.333232 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:34:37.336805 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:34:37.346854 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:34:37.347137 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:34:37.351677 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:34:37.362849 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:34:37.365642 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:34:37.373976 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:34:37.377628 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:34:37.381374 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Apr 17 23:34:37.392153 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:34:37.395907 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:34:37.399035 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:34:37.410490 systemd[1]: Stopped target swap.target - Swaps. Apr 17 23:34:37.412898 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:34:37.413161 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:34:37.423035 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:34:37.426463 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:34:37.435821 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:34:37.440970 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:34:37.444519 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:34:37.444753 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 17 23:34:37.455777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:34:37.456060 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:34:37.459713 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:34:37.459981 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:34:37.481343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:34:37.483942 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:34:37.484232 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:34:37.500025 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:34:37.503410 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Apr 17 23:34:37.503705 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:34:37.516883 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:34:37.517162 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:34:37.546354 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:34:37.551032 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:34:37.562150 ignition[1444]: INFO : Ignition 2.19.0 Apr 17 23:34:37.562150 ignition[1444]: INFO : Stage: umount Apr 17 23:34:37.568042 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:34:37.568042 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:34:37.568042 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:34:37.579881 ignition[1444]: INFO : PUT result: OK Apr 17 23:34:37.584529 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:34:37.593753 ignition[1444]: INFO : umount: umount passed Apr 17 23:34:37.593753 ignition[1444]: INFO : Ignition finished successfully Apr 17 23:34:37.600188 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:34:37.600560 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:34:37.608325 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:34:37.608656 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:34:37.617191 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:34:37.617359 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:34:37.620904 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:34:37.621023 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:34:37.633396 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Apr 17 23:34:37.633494 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 17 23:34:37.636686 systemd[1]: Stopped target network.target - Network. Apr 17 23:34:37.639014 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:34:37.639106 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:34:37.642313 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:34:37.644718 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:34:37.652151 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:34:37.655143 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:34:37.657316 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:34:37.660102 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:34:37.660186 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:34:37.662855 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:34:37.662950 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:34:37.665850 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:34:37.665964 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:34:37.668630 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:34:37.668713 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:34:37.671772 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:34:37.671857 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:34:37.672774 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:34:37.673072 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Apr 17 23:34:37.733176 systemd-networkd[1199]: eth0: DHCPv6 lease lost
Apr 17 23:34:37.735234 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:34:37.735622 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:34:37.751534 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:34:37.752707 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:34:37.757893 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:34:37.758622 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:34:37.787144 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:34:37.789831 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:34:37.789961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:34:37.802189 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:34:37.802299 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:34:37.805223 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:34:37.805312 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:34:37.808305 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:34:37.808389 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:34:37.813184 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:34:37.850154 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:34:37.850542 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:34:37.865209 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:34:37.865741 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:34:37.877662 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:34:37.877761 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:34:37.880696 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:34:37.880765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:34:37.883732 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:34:37.883819 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:34:37.888853 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:34:37.888975 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:34:37.902644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:34:37.902742 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:34:37.923246 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:34:37.926272 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:34:37.926394 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:34:37.929710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:34:37.929801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:34:37.966718 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:34:37.967168 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:34:37.976696 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:34:37.991959 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:34:38.012825 systemd[1]: Switching root.
Apr 17 23:34:38.053364 systemd-journald[251]: Journal stopped
Apr 17 23:34:40.190764 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:34:40.190897 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:34:40.191475 kernel: SELinux: policy capability open_perms=1
Apr 17 23:34:40.191517 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:34:40.191547 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:34:40.191593 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:34:40.191624 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:34:40.191654 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:34:40.191685 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:34:40.191724 kernel: audit: type=1403 audit(1776468878.584:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:34:40.191758 systemd[1]: Successfully loaded SELinux policy in 52.605ms.
Apr 17 23:34:40.191811 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.351ms.
Apr 17 23:34:40.191846 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:34:40.191878 systemd[1]: Detected virtualization amazon.
Apr 17 23:34:40.191914 systemd[1]: Detected architecture arm64.
Apr 17 23:34:40.191994 systemd[1]: Detected first boot.
Apr 17 23:34:40.192030 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:34:40.192060 zram_generator::config[1503]: No configuration found.
Apr 17 23:34:40.192095 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:34:40.192127 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:34:40.192160 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 23:34:40.192193 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:34:40.192230 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:34:40.192263 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:34:40.192293 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:34:40.192325 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:34:40.192357 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:34:40.192390 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:34:40.192419 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:34:40.192448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:34:40.192481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:34:40.192515 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:34:40.192548 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:34:40.192581 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:34:40.192611 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:34:40.192644 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:34:40.192673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:34:40.192703 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:34:40.192742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:34:40.192773 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:34:40.192807 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:34:40.192839 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:34:40.192868 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:34:40.192900 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:34:40.196787 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:34:40.196845 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:34:40.196878 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:34:40.196909 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:34:40.197201 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:34:40.197244 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:34:40.197275 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:34:40.197308 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:34:40.197338 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:34:40.197372 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:34:40.197402 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:34:40.197432 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:34:40.197462 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:34:40.197498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:34:40.197552 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:34:40.197585 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:34:40.197615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:34:40.197645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:34:40.197675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:34:40.197706 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:34:40.197735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:34:40.197770 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:34:40.197802 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 17 23:34:40.197846 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 17 23:34:40.197876 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:34:40.197908 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:34:40.199026 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:34:40.199067 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:34:40.199097 kernel: loop: module loaded
Apr 17 23:34:40.199131 kernel: fuse: init (API version 7.39)
Apr 17 23:34:40.199170 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:34:40.199203 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:34:40.199235 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:34:40.199264 kernel: ACPI: bus type drm_connector registered
Apr 17 23:34:40.199293 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:34:40.199322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:34:40.199354 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:34:40.199383 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:34:40.199413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:34:40.199449 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:34:40.199481 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:34:40.199564 systemd-journald[1607]: Collecting audit messages is disabled.
Apr 17 23:34:40.199622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:34:40.199653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:34:40.199682 systemd-journald[1607]: Journal started
Apr 17 23:34:40.199735 systemd-journald[1607]: Runtime Journal (/run/log/journal/ec2a2a3ae444829931c10c12fd9cb8ad) is 8.0M, max 75.3M, 67.3M free.
Apr 17 23:34:40.214041 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:34:40.217469 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:34:40.217867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:34:40.223988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:34:40.224346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:34:40.231011 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:34:40.231395 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:34:40.237662 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:34:40.251283 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:34:40.251751 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:34:40.260174 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:34:40.268489 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:34:40.275428 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:34:40.304875 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:34:40.319182 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:34:40.337083 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:34:40.342021 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:34:40.362248 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:34:40.375276 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:34:40.379093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:34:40.386290 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:34:40.395117 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:34:40.405211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:34:40.431562 systemd-journald[1607]: Time spent on flushing to /var/log/journal/ec2a2a3ae444829931c10c12fd9cb8ad is 59.402ms for 884 entries.
Apr 17 23:34:40.431562 systemd-journald[1607]: System Journal (/var/log/journal/ec2a2a3ae444829931c10c12fd9cb8ad) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:34:40.507186 systemd-journald[1607]: Received client request to flush runtime journal.
Apr 17 23:34:40.434252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:34:40.451563 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:34:40.467554 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:34:40.473821 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:34:40.508299 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:34:40.525272 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:34:40.542493 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:34:40.550834 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:34:40.587725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:34:40.600820 udevadm[1661]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 17 23:34:40.614837 systemd-tmpfiles[1657]: ACLs are not supported, ignoring.
Apr 17 23:34:40.614877 systemd-tmpfiles[1657]: ACLs are not supported, ignoring.
Apr 17 23:34:40.624207 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:34:40.648333 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:34:40.701324 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:34:40.716453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:34:40.755606 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Apr 17 23:34:40.755648 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Apr 17 23:34:40.765552 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:34:41.324616 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:34:41.338240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:34:41.390980 systemd-udevd[1684]: Using default interface naming scheme 'v255'.
Apr 17 23:34:41.427913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:34:41.459266 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:34:41.500270 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:34:41.531961 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 17 23:34:41.577963 (udev-worker)[1687]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:34:41.702717 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:34:41.859245 systemd-networkd[1694]: lo: Link UP
Apr 17 23:34:41.859259 systemd-networkd[1694]: lo: Gained carrier
Apr 17 23:34:41.871628 systemd-networkd[1694]: Enumeration completed
Apr 17 23:34:41.874694 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:34:41.874703 systemd-networkd[1694]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:34:41.875006 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:34:41.883242 systemd-networkd[1694]: eth0: Link UP
Apr 17 23:34:41.883902 systemd-networkd[1694]: eth0: Gained carrier
Apr 17 23:34:41.884182 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:34:41.905799 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:34:41.918982 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1704)
Apr 17 23:34:41.920056 systemd-networkd[1694]: eth0: DHCPv4 address 172.31.31.247/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:34:41.960773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:34:42.141352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:34:42.161132 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:34:42.177495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:34:42.189343 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:34:42.211894 lvm[1813]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:34:42.249767 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:34:42.254195 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:34:42.262413 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:34:42.278948 lvm[1816]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:34:42.314833 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:34:42.320891 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:34:42.324261 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:34:42.324315 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:34:42.327116 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:34:42.331866 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:34:42.341257 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:34:42.348020 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:34:42.353299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:34:42.363501 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:34:42.378995 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:34:42.400875 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:34:42.411141 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:34:42.430776 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:34:42.455559 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:34:42.456904 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:34:42.478003 kernel: loop0: detected capacity change from 0 to 114432
Apr 17 23:34:42.511963 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:34:42.533537 kernel: loop1: detected capacity change from 0 to 52536
Apr 17 23:34:42.629026 kernel: loop2: detected capacity change from 0 to 209336
Apr 17 23:34:42.749328 kernel: loop3: detected capacity change from 0 to 114328
Apr 17 23:34:42.789032 kernel: loop4: detected capacity change from 0 to 114432
Apr 17 23:34:42.812977 kernel: loop5: detected capacity change from 0 to 52536
Apr 17 23:34:42.826980 kernel: loop6: detected capacity change from 0 to 209336
Apr 17 23:34:42.860029 kernel: loop7: detected capacity change from 0 to 114328
Apr 17 23:34:42.878360 (sd-merge)[1837]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 17 23:34:42.881395 (sd-merge)[1837]: Merged extensions into '/usr'.
Apr 17 23:34:42.916737 systemd[1]: Reloading requested from client PID 1824 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:34:42.916770 systemd[1]: Reloading...
Apr 17 23:34:43.022027 ldconfig[1820]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:34:43.044971 zram_generator::config[1866]: No configuration found.
Apr 17 23:34:43.340169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:34:43.523149 systemd[1]: Reloading finished in 605 ms.
Apr 17 23:34:43.562315 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:34:43.568476 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:34:43.588372 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:34:43.596830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:34:43.613085 systemd[1]: Reloading requested from client PID 1924 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:34:43.613137 systemd[1]: Reloading...
Apr 17 23:34:43.661249 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:34:43.661987 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:34:43.663826 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:34:43.664436 systemd-tmpfiles[1925]: ACLs are not supported, ignoring.
Apr 17 23:34:43.664575 systemd-tmpfiles[1925]: ACLs are not supported, ignoring.
Apr 17 23:34:43.673337 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:34:43.673357 systemd-tmpfiles[1925]: Skipping /boot
Apr 17 23:34:43.703362 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:34:43.703547 systemd-tmpfiles[1925]: Skipping /boot
Apr 17 23:34:43.813663 systemd-networkd[1694]: eth0: Gained IPv6LL
Apr 17 23:34:43.814986 zram_generator::config[1963]: No configuration found.
Apr 17 23:34:44.057104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:34:44.213302 systemd[1]: Reloading finished in 599 ms.
Apr 17 23:34:44.240618 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:34:44.251163 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:34:44.270302 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:34:44.280305 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:34:44.292774 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:34:44.311764 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:34:44.324244 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:34:44.353101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:34:44.362118 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:34:44.377133 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:34:44.403330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:34:44.406491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:34:44.412454 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:34:44.417338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:34:44.417736 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:34:44.453377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:34:44.453804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:34:44.458736 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:34:44.459234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:34:44.483231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:34:44.502290 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:34:44.508315 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:34:44.517954 augenrules[2050]: No rules
Apr 17 23:34:44.532020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:34:44.536330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:34:44.557098 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:34:44.573515 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:34:44.580437 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:34:44.587820 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:34:44.594466 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:34:44.594830 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:34:44.603784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:34:44.604263 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:34:44.611159 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:34:44.613279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:34:44.622799 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:34:44.664421 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:34:44.670120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:34:44.679388 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:34:44.685258 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:34:44.696291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:34:44.715226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:34:44.722303 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:34:44.722407 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:34:44.730178 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:34:44.730846 systemd-resolved[2020]: Positive Trust Anchors:
Apr 17 23:34:44.730868 systemd-resolved[2020]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:34:44.730970 systemd-resolved[2020]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:34:44.732220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:34:44.732599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:34:44.742780 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:34:44.743532 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:34:44.756198 systemd-resolved[2020]: Defaulting to hostname 'linux'.
Apr 17 23:34:44.761911 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:34:44.771796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:34:44.772203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:34:44.781273 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:34:44.783400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:34:44.794244 systemd[1]: Reached target network.target - Network.
Apr 17 23:34:44.797206 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:34:44.800046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:34:44.803807 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:34:44.803980 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:34:44.807079 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:34:44.810566 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:34:44.814049 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:34:44.816876 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:34:44.820910 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:34:44.824417 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:34:44.824615 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:34:44.827093 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:34:44.830497 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:34:44.836183 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:34:44.841898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:34:44.845037 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:34:44.850099 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:34:44.853265 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:34:44.855678 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:34:44.859379 systemd[1]: System is tainted: cgroupsv1
Apr 17 23:34:44.859462 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:34:44.859512 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:34:44.869313 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:34:44.877891 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:34:44.886082 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:34:44.901295 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:34:44.912638 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:34:44.915676 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:34:44.929363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:34:44.939072 jq[2094]: false
Apr 17 23:34:44.956862 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:34:44.980982 systemd[1]: Started ntpd.service - Network Time Service.
Apr 17 23:34:44.992253 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:34:45.008962 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found loop4
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found loop5
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found loop6
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found loop7
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found nvme0n1
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found nvme0n1p1
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found nvme0n1p2
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found nvme0n1p3
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found usr
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found nvme0n1p4
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found nvme0n1p6
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found nvme0n1p7
Apr 17 23:34:45.031023 extend-filesystems[2095]: Found nvme0n1p9
Apr 17 23:34:45.031023 extend-filesystems[2095]: Checking size of /dev/nvme0n1p9
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:13 UTC 2026 (1): Starting
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: ----------------------------------------------------
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: ntp-4 is maintained by Network Time Foundation,
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: corporation. Support and training for ntp-4 are
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: available at https://www.nwtime.org/support
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: ----------------------------------------------------
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: proto: precision = 0.108 usec (-23)
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: basedate set to 2026-04-05
Apr 17 23:34:45.138186 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: gps base set to 2026-04-05 (week 2413)
Apr 17 23:34:45.040658 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 17 23:34:45.102374 dbus-daemon[2093]: [system] SELinux support is enabled
Apr 17 23:34:45.158737 extend-filesystems[2095]: Resized partition /dev/nvme0n1p9
Apr 17 23:34:45.173974 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 17 23:34:45.069695 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:34:45.104110 ntpd[2102]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:13 UTC 2026 (1): Starting
Apr 17 23:34:45.174805 extend-filesystems[2122]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:34:45.215276 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: Listen and drop on 0 v6wildcard [::]:123
Apr 17 23:34:45.215276 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 17 23:34:45.215276 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: Listen normally on 2 lo 127.0.0.1:123
Apr 17 23:34:45.215276 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: Listen normally on 3 eth0 172.31.31.247:123
Apr 17 23:34:45.215276 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: Listen normally on 4 lo [::1]:123
Apr 17 23:34:45.215276 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: Listen normally on 5 eth0 [fe80::45f:86ff:fe72:9937%2]:123
Apr 17 23:34:45.215276 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: Listening on routing socket on fd #22 for interface updates
Apr 17 23:34:45.123164 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:34:45.104156 ntpd[2102]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 17 23:34:45.153270 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:34:45.104176 ntpd[2102]: ----------------------------------------------------
Apr 17 23:34:45.168245 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:34:45.104196 ntpd[2102]: ntp-4 is maintained by Network Time Foundation,
Apr 17 23:34:45.176515 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:34:45.104215 ntpd[2102]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 17 23:34:45.201696 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:34:45.104235 ntpd[2102]: corporation. Support and training for ntp-4 are
Apr 17 23:34:45.214771 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:34:45.104253 ntpd[2102]: available at https://www.nwtime.org/support
Apr 17 23:34:45.244081 jq[2131]: true
Apr 17 23:34:45.243236 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:34:45.104271 ntpd[2102]: ----------------------------------------------------
Apr 17 23:34:45.243727 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:34:45.263117 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:34:45.263117 ntpd[2102]: 17 Apr 23:34:45 ntpd[2102]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:34:45.117743 ntpd[2102]: proto: precision = 0.108 usec (-23)
Apr 17 23:34:45.251675 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:34:45.124079 ntpd[2102]: basedate set to 2026-04-05
Apr 17 23:34:45.252189 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:34:45.124111 ntpd[2102]: gps base set to 2026-04-05 (week 2413)
Apr 17 23:34:45.131109 dbus-daemon[2093]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1694 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 17 23:34:45.163088 ntpd[2102]: Listen and drop on 0 v6wildcard [::]:123
Apr 17 23:34:45.163180 ntpd[2102]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 17 23:34:45.168991 ntpd[2102]: Listen normally on 2 lo 127.0.0.1:123
Apr 17 23:34:45.169250 ntpd[2102]: Listen normally on 3 eth0 172.31.31.247:123
Apr 17 23:34:45.176749 ntpd[2102]: Listen normally on 4 lo [::1]:123
Apr 17 23:34:45.176863 ntpd[2102]: Listen normally on 5 eth0 [fe80::45f:86ff:fe72:9937%2]:123
Apr 17 23:34:45.176992 ntpd[2102]: Listening on routing socket on fd #22 for interface updates
Apr 17 23:34:45.241909 ntpd[2102]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:34:45.253993 ntpd[2102]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:34:45.278007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:34:45.278496 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:34:45.352618 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:34:45.354747 (ntainerd)[2145]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:34:45.368608 dbus-daemon[2093]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 17 23:34:45.366337 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:34:45.366430 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:34:45.372712 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:34:45.372751 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:34:45.388713 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 17 23:34:45.447226 coreos-metadata[2091]: Apr 17 23:34:45.447 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 17 23:34:45.477678 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 17 23:34:45.477778 tar[2138]: linux-arm64/LICENSE
Apr 17 23:34:45.491865 jq[2140]: true
Apr 17 23:34:45.494385 extend-filesystems[2122]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 17 23:34:45.494385 extend-filesystems[2122]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 17 23:34:45.494385 extend-filesystems[2122]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 17 23:34:45.533354 tar[2138]: linux-arm64/helm
Apr 17 23:34:45.533542 coreos-metadata[2091]: Apr 17 23:34:45.481 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 17 23:34:45.533542 coreos-metadata[2091]: Apr 17 23:34:45.494 INFO Fetch successful
Apr 17 23:34:45.533542 coreos-metadata[2091]: Apr 17 23:34:45.494 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 17 23:34:45.533542 coreos-metadata[2091]: Apr 17 23:34:45.518 INFO Fetch successful
Apr 17 23:34:45.533542 coreos-metadata[2091]: Apr 17 23:34:45.518 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 17 23:34:45.533542 coreos-metadata[2091]: Apr 17 23:34:45.529 INFO Fetch successful
Apr 17 23:34:45.533542 coreos-metadata[2091]: Apr 17 23:34:45.529 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 17 23:34:45.533542 coreos-metadata[2091]: Apr 17 23:34:45.531 INFO Fetch successful
Apr 17 23:34:45.533542 coreos-metadata[2091]: Apr 17 23:34:45.531 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 17 23:34:45.534096 extend-filesystems[2095]: Resized filesystem in /dev/nvme0n1p9
Apr 17 23:34:45.539798 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 17 23:34:45.565916 coreos-metadata[2091]: Apr 17 23:34:45.550 INFO Fetch failed with 404: resource not found
Apr 17 23:34:45.565916 coreos-metadata[2091]: Apr 17 23:34:45.550 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 17 23:34:45.565916 coreos-metadata[2091]: Apr 17 23:34:45.565 INFO Fetch successful
Apr 17 23:34:45.565916 coreos-metadata[2091]: Apr 17 23:34:45.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 17 23:34:45.544376 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:34:45.544852 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:34:45.557236 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 17 23:34:45.580225 coreos-metadata[2091]: Apr 17 23:34:45.579 INFO Fetch successful
Apr 17 23:34:45.580225 coreos-metadata[2091]: Apr 17 23:34:45.579 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 17 23:34:45.580225 coreos-metadata[2091]: Apr 17 23:34:45.580 INFO Fetch successful
Apr 17 23:34:45.580225 coreos-metadata[2091]: Apr 17 23:34:45.580 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 17 23:34:45.584871 coreos-metadata[2091]: Apr 17 23:34:45.584 INFO Fetch successful
Apr 17 23:34:45.584871 coreos-metadata[2091]: Apr 17 23:34:45.584 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 17 23:34:45.592636 coreos-metadata[2091]: Apr 17 23:34:45.587 INFO Fetch successful
Apr 17 23:34:45.605267 update_engine[2128]: I20260417 23:34:45.597899 2128 main.cc:92] Flatcar Update Engine starting
Apr 17 23:34:45.625722 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:34:45.632038 update_engine[2128]: I20260417 23:34:45.630699 2128 update_check_scheduler.cc:74] Next update check in 5m0s
Apr 17 23:34:45.631759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:34:45.643412 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:34:45.782819 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 17 23:34:45.790255 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:34:45.881958 bash[2211]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:34:45.892770 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:34:45.905714 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:34:45.941520 systemd[1]: Starting sshkeys.service...
Apr 17 23:34:46.038813 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 23:34:46.041814 systemd-logind[2124]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 17 23:34:46.137735 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (2213)
Apr 17 23:34:46.137782 amazon-ssm-agent[2174]: Initializing new seelog logger
Apr 17 23:34:46.041850 systemd-logind[2124]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 17 23:34:46.143732 containerd[2145]: time="2026-04-17T23:34:46.086965294Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:34:46.144141 amazon-ssm-agent[2174]: New Seelog Logger Creation Complete
Apr 17 23:34:46.144141 amazon-ssm-agent[2174]: 2026/04/17 23:34:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:34:46.144141 amazon-ssm-agent[2174]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:34:46.042230 systemd-logind[2124]: New seat seat0.
Apr 17 23:34:46.137799 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 23:34:46.149761 amazon-ssm-agent[2174]: 2026/04/17 23:34:46 processing appconfig overrides
Apr 17 23:34:46.144401 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:34:46.156105 amazon-ssm-agent[2174]: 2026/04/17 23:34:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:34:46.156105 amazon-ssm-agent[2174]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:34:46.156332 amazon-ssm-agent[2174]: 2026/04/17 23:34:46 processing appconfig overrides
Apr 17 23:34:46.156502 amazon-ssm-agent[2174]: 2026/04/17 23:34:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:34:46.156502 amazon-ssm-agent[2174]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:34:46.156643 amazon-ssm-agent[2174]: 2026/04/17 23:34:46 processing appconfig overrides
Apr 17 23:34:46.169839 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO Proxy environment variables:
Apr 17 23:34:46.195541 amazon-ssm-agent[2174]: 2026/04/17 23:34:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:34:46.195541 amazon-ssm-agent[2174]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:34:46.195750 amazon-ssm-agent[2174]: 2026/04/17 23:34:46 processing appconfig overrides
Apr 17 23:34:46.271360 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO https_proxy:
Apr 17 23:34:46.365098 containerd[2145]: time="2026-04-17T23:34:46.364132787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:34:46.369657 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO http_proxy:
Apr 17 23:34:46.377298 containerd[2145]: time="2026-04-17T23:34:46.377226659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:34:46.377966 containerd[2145]: time="2026-04-17T23:34:46.377534099Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:34:46.377966 containerd[2145]: time="2026-04-17T23:34:46.377583659Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:34:46.380014 containerd[2145]: time="2026-04-17T23:34:46.378780131Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:34:46.380014 containerd[2145]: time="2026-04-17T23:34:46.378841175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:34:46.380014 containerd[2145]: time="2026-04-17T23:34:46.379051331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:34:46.380014 containerd[2145]: time="2026-04-17T23:34:46.379086239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:34:46.380661 containerd[2145]: time="2026-04-17T23:34:46.380610143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:34:46.383792 containerd[2145]: time="2026-04-17T23:34:46.382979543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:34:46.383792 containerd[2145]: time="2026-04-17T23:34:46.383040827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:34:46.383792 containerd[2145]: time="2026-04-17T23:34:46.383080019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:34:46.383792 containerd[2145]: time="2026-04-17T23:34:46.383298743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:34:46.383792 containerd[2145]: time="2026-04-17T23:34:46.383719319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:34:46.386557 containerd[2145]: time="2026-04-17T23:34:46.386473415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:34:46.387959 containerd[2145]: time="2026-04-17T23:34:46.387699047Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:34:46.389896 containerd[2145]: time="2026-04-17T23:34:46.389061167Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:34:46.390793 containerd[2145]: time="2026-04-17T23:34:46.390575363Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:34:46.406946 containerd[2145]: time="2026-04-17T23:34:46.404102975Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:34:46.406946 containerd[2145]: time="2026-04-17T23:34:46.404220743Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:34:46.406946 containerd[2145]: time="2026-04-17T23:34:46.404352119Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:34:46.406946 containerd[2145]: time="2026-04-17T23:34:46.404390087Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:34:46.406946 containerd[2145]: time="2026-04-17T23:34:46.404426759Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:34:46.406946 containerd[2145]: time="2026-04-17T23:34:46.404682971Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:34:46.406946 containerd[2145]: time="2026-04-17T23:34:46.405373331Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:34:46.406946 containerd[2145]: time="2026-04-17T23:34:46.405782951Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:34:46.406946 containerd[2145]: time="2026-04-17T23:34:46.405844751Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.405905195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415055940Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415098228Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415135224Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415176108Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415215360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415246752Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415276776Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415311588Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415352268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415383336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415416876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415450980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.420944 containerd[2145]: time="2026-04-17T23:34:46.415480224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415510296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415542228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415573416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415603452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415637268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415665468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415693884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415724688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415770096Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415818720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415846872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.415874940Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.418118316Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:34:46.421639 containerd[2145]: time="2026-04-17T23:34:46.418222392Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:34:46.422271 containerd[2145]: time="2026-04-17T23:34:46.418269504Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:34:46.422271 containerd[2145]: time="2026-04-17T23:34:46.418326900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:34:46.422271 containerd[2145]: time="2026-04-17T23:34:46.418365708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.422271 containerd[2145]: time="2026-04-17T23:34:46.418407384Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:34:46.422271 containerd[2145]: time="2026-04-17T23:34:46.418451496Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:34:46.422271 containerd[2145]: time="2026-04-17T23:34:46.418519320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:34:46.431998 containerd[2145]: time="2026-04-17T23:34:46.428948544Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:34:46.431998 containerd[2145]: time="2026-04-17T23:34:46.429150600Z" level=info msg="Connect containerd service"
Apr 17 23:34:46.431998 containerd[2145]: time="2026-04-17T23:34:46.429249852Z" level=info msg="using legacy CRI server"
Apr 17 23:34:46.431998 containerd[2145]: time="2026-04-17T23:34:46.429281124Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:34:46.431998 containerd[2145]: time="2026-04-17T23:34:46.429617808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:34:46.439960 containerd[2145]: time="2026-04-17T23:34:46.438439188Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:34:46.450946 containerd[2145]: time="2026-04-17T23:34:46.450085920Z" level=info msg="Start subscribing containerd event"
Apr 17 23:34:46.462945 containerd[2145]: time="2026-04-17T23:34:46.455145636Z" level=info msg="Start recovering state"
Apr 17 23:34:46.462945 containerd[2145]: time="2026-04-17T23:34:46.455314512Z" level=info msg="Start event monitor"
Apr 17 23:34:46.462945 containerd[2145]: time="2026-04-17T23:34:46.455341320Z"
level=info msg="Start snapshots syncer" Apr 17 23:34:46.462945 containerd[2145]: time="2026-04-17T23:34:46.455364588Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:34:46.462945 containerd[2145]: time="2026-04-17T23:34:46.455384112Z" level=info msg="Start streaming server" Apr 17 23:34:46.462945 containerd[2145]: time="2026-04-17T23:34:46.451979856Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:34:46.462945 containerd[2145]: time="2026-04-17T23:34:46.455822100Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:34:46.456128 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:34:46.469143 containerd[2145]: time="2026-04-17T23:34:46.465011688Z" level=info msg="containerd successfully booted in 0.380640s" Apr 17 23:34:46.488004 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO no_proxy: Apr 17 23:34:46.518827 dbus-daemon[2093]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:34:46.519421 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 17 23:34:46.519829 dbus-daemon[2093]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2157 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:34:46.575356 coreos-metadata[2221]: Apr 17 23:34:46.574 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:34:46.578601 coreos-metadata[2221]: Apr 17 23:34:46.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 17 23:34:46.578612 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 17 23:34:46.587245 coreos-metadata[2221]: Apr 17 23:34:46.586 INFO Fetch successful Apr 17 23:34:46.587245 coreos-metadata[2221]: Apr 17 23:34:46.586 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 17 23:34:46.594968 coreos-metadata[2221]: Apr 17 23:34:46.594 INFO Fetch successful Apr 17 23:34:46.602494 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO Checking if agent identity type OnPrem can be assumed Apr 17 23:34:46.598510 unknown[2221]: wrote ssh authorized keys file for user: core Apr 17 23:34:46.701294 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO Checking if agent identity type EC2 can be assumed Apr 17 23:34:46.741346 update-ssh-keys[2302]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:34:46.742218 polkitd[2272]: Started polkitd version 121 Apr 17 23:34:46.747497 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:34:46.767241 systemd[1]: Finished sshkeys.service. Apr 17 23:34:46.801685 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO Agent will take identity from EC2 Apr 17 23:34:46.812581 polkitd[2272]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:34:46.812721 polkitd[2272]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:34:46.824253 polkitd[2272]: Finished loading, compiling and executing 2 rules Apr 17 23:34:46.834523 dbus-daemon[2093]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:34:46.835100 systemd[1]: Started polkit.service - Authorization Manager. Apr 17 23:34:46.841513 polkitd[2272]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:34:46.901640 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:34:46.921324 systemd-hostnamed[2157]: Hostname set to (transient) Apr 17 23:34:46.922345 systemd-resolved[2020]: System hostname changed to 'ip-172-31-31-247'. 
Apr 17 23:34:46.962191 locksmithd[2180]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:34:47.002259 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:34:47.103954 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:34:47.200543 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 17 23:34:47.303002 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 17 23:34:47.401110 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO [amazon-ssm-agent] Starting Core Agent Apr 17 23:34:47.502781 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 17 23:34:47.603086 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO [Registrar] Starting registrar module Apr 17 23:34:47.682791 amazon-ssm-agent[2174]: 2026-04-17 23:34:46 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 17 23:34:47.682791 amazon-ssm-agent[2174]: 2026-04-17 23:34:47 INFO [EC2Identity] EC2 registration was successful. Apr 17 23:34:47.683128 amazon-ssm-agent[2174]: 2026-04-17 23:34:47 INFO [CredentialRefresher] credentialRefresher has started Apr 17 23:34:47.683128 amazon-ssm-agent[2174]: 2026-04-17 23:34:47 INFO [CredentialRefresher] Starting credentials refresher loop Apr 17 23:34:47.683128 amazon-ssm-agent[2174]: 2026-04-17 23:34:47 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 17 23:34:47.703575 amazon-ssm-agent[2174]: 2026-04-17 23:34:47 INFO [CredentialRefresher] Next credential rotation will be in 31.716658944566667 minutes Apr 17 23:34:47.838753 tar[2138]: linux-arm64/README.md Apr 17 23:34:47.871987 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 17 23:34:48.528219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:48.542591 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:34:48.736946 amazon-ssm-agent[2174]: 2026-04-17 23:34:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 17 23:34:48.838062 amazon-ssm-agent[2174]: 2026-04-17 23:34:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2365) started Apr 17 23:34:48.939040 amazon-ssm-agent[2174]: 2026-04-17 23:34:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 17 23:34:49.168522 sshd_keygen[2149]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:34:49.215636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:34:49.232913 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:34:49.246963 systemd[1]: Started sshd@0-172.31.31.247:22-4.175.71.9:38730.service - OpenSSH per-connection server daemon (4.175.71.9:38730). Apr 17 23:34:49.269664 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:34:49.270258 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:34:49.288434 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:34:49.329705 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:34:49.343205 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:34:49.355425 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:34:49.362086 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:34:49.367095 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 17 23:34:49.376028 systemd[1]: Startup finished in 10.048s (kernel) + 10.843s (userspace) = 20.892s. Apr 17 23:34:49.668966 kubelet[2363]: E0417 23:34:49.668786 2363 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:34:49.675244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:34:49.675671 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:34:50.324449 sshd[2388]: Accepted publickey for core from 4.175.71.9 port 38730 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:50.328793 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:50.351249 systemd-logind[2124]: New session 1 of user core. Apr 17 23:34:50.354473 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:34:50.362424 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:34:50.398405 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:34:50.425531 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:34:50.430880 (systemd)[2411]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:34:50.659829 systemd[2411]: Queued start job for default target default.target. Apr 17 23:34:50.660555 systemd[2411]: Created slice app.slice - User Application Slice. Apr 17 23:34:50.660608 systemd[2411]: Reached target paths.target - Paths. Apr 17 23:34:50.660640 systemd[2411]: Reached target timers.target - Timers. Apr 17 23:34:50.667115 systemd[2411]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Apr 17 23:34:50.694116 systemd[2411]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:34:50.694270 systemd[2411]: Reached target sockets.target - Sockets. Apr 17 23:34:50.694303 systemd[2411]: Reached target basic.target - Basic System. Apr 17 23:34:50.694399 systemd[2411]: Reached target default.target - Main User Target. Apr 17 23:34:50.694459 systemd[2411]: Startup finished in 251ms. Apr 17 23:34:50.695619 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:34:50.704574 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:34:51.416357 systemd[1]: Started sshd@1-172.31.31.247:22-4.175.71.9:38732.service - OpenSSH per-connection server daemon (4.175.71.9:38732). Apr 17 23:34:51.818057 systemd-resolved[2020]: Clock change detected. Flushing caches. Apr 17 23:34:52.110782 sshd[2423]: Accepted publickey for core from 4.175.71.9 port 38732 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:52.113461 sshd[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:52.121846 systemd-logind[2124]: New session 2 of user core. Apr 17 23:34:52.130708 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:34:52.793426 sshd[2423]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:52.801597 systemd-logind[2124]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:34:52.802938 systemd[1]: sshd@1-172.31.31.247:22-4.175.71.9:38732.service: Deactivated successfully. Apr 17 23:34:52.808066 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:34:52.811177 systemd-logind[2124]: Removed session 2. Apr 17 23:34:52.973565 systemd[1]: Started sshd@2-172.31.31.247:22-4.175.71.9:38738.service - OpenSSH per-connection server daemon (4.175.71.9:38738). 
Apr 17 23:34:54.016551 sshd[2431]: Accepted publickey for core from 4.175.71.9 port 38738 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:54.018265 sshd[2431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:54.027206 systemd-logind[2124]: New session 3 of user core. Apr 17 23:34:54.034117 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:34:54.720376 sshd[2431]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:54.724998 systemd[1]: sshd@2-172.31.31.247:22-4.175.71.9:38738.service: Deactivated successfully. Apr 17 23:34:54.732506 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:34:54.734046 systemd-logind[2124]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:34:54.735856 systemd-logind[2124]: Removed session 3. Apr 17 23:34:54.878603 systemd[1]: Started sshd@3-172.31.31.247:22-4.175.71.9:38740.service - OpenSSH per-connection server daemon (4.175.71.9:38740). Apr 17 23:34:55.875134 sshd[2439]: Accepted publickey for core from 4.175.71.9 port 38740 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:55.876841 sshd[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:55.886075 systemd-logind[2124]: New session 4 of user core. Apr 17 23:34:55.892655 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:34:56.558423 sshd[2439]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:56.563718 systemd-logind[2124]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:34:56.565367 systemd[1]: sshd@3-172.31.31.247:22-4.175.71.9:38740.service: Deactivated successfully. Apr 17 23:34:56.572060 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:34:56.574557 systemd-logind[2124]: Removed session 4. 
Apr 17 23:34:56.726642 systemd[1]: Started sshd@4-172.31.31.247:22-4.175.71.9:41756.service - OpenSSH per-connection server daemon (4.175.71.9:41756). Apr 17 23:34:57.726591 sshd[2447]: Accepted publickey for core from 4.175.71.9 port 41756 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:57.729191 sshd[2447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:57.737709 systemd-logind[2124]: New session 5 of user core. Apr 17 23:34:57.749609 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:34:58.269687 sudo[2451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:34:58.270427 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:34:58.288736 sudo[2451]: pam_unix(sudo:session): session closed for user root Apr 17 23:34:58.450465 sshd[2447]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:58.457383 systemd-logind[2124]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:34:58.458860 systemd[1]: sshd@4-172.31.31.247:22-4.175.71.9:41756.service: Deactivated successfully. Apr 17 23:34:58.464625 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:34:58.466421 systemd-logind[2124]: Removed session 5. Apr 17 23:34:58.616671 systemd[1]: Started sshd@5-172.31.31.247:22-4.175.71.9:41758.service - OpenSSH per-connection server daemon (4.175.71.9:41758). Apr 17 23:34:59.453401 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:34:59.461456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:34:59.626728 sshd[2456]: Accepted publickey for core from 4.175.71.9 port 41758 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:59.628725 sshd[2456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:59.638453 systemd-logind[2124]: New session 6 of user core. 
Apr 17 23:34:59.646623 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:34:59.827447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:59.845713 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:34:59.916600 kubelet[2472]: E0417 23:34:59.916506 2472 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:34:59.923262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:34:59.923637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:35:00.152455 sudo[2481]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:35:00.153190 sudo[2481]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:00.160123 sudo[2481]: pam_unix(sudo:session): session closed for user root Apr 17 23:35:00.170402 sudo[2480]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:35:00.171565 sudo[2480]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:00.196578 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:35:00.200973 auditctl[2484]: No rules Apr 17 23:35:00.202011 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:35:00.202554 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:35:00.217009 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Apr 17 23:35:00.262016 augenrules[2503]: No rules Apr 17 23:35:00.266961 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:35:00.271501 sudo[2480]: pam_unix(sudo:session): session closed for user root Apr 17 23:35:00.432439 sshd[2456]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:00.441819 systemd[1]: sshd@5-172.31.31.247:22-4.175.71.9:41758.service: Deactivated successfully. Apr 17 23:35:00.446794 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:35:00.447135 systemd-logind[2124]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:35:00.449789 systemd-logind[2124]: Removed session 6. Apr 17 23:35:00.613528 systemd[1]: Started sshd@6-172.31.31.247:22-4.175.71.9:41762.service - OpenSSH per-connection server daemon (4.175.71.9:41762). Apr 17 23:35:01.629391 sshd[2512]: Accepted publickey for core from 4.175.71.9 port 41762 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:01.632212 sshd[2512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:01.641337 systemd-logind[2124]: New session 7 of user core. Apr 17 23:35:01.645627 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:35:02.170702 sudo[2516]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:35:02.171380 sudo[2516]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:02.665551 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:35:02.679775 (dockerd)[2531]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:35:03.091312 dockerd[2531]: time="2026-04-17T23:35:03.090989751Z" level=info msg="Starting up" Apr 17 23:35:03.218636 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1637997657-merged.mount: Deactivated successfully. 
Apr 17 23:35:03.511885 systemd[1]: var-lib-docker-metacopy\x2dcheck3488007897-merged.mount: Deactivated successfully. Apr 17 23:35:03.530165 dockerd[2531]: time="2026-04-17T23:35:03.529417410Z" level=info msg="Loading containers: start." Apr 17 23:35:03.689149 kernel: Initializing XFRM netlink socket Apr 17 23:35:03.722924 (udev-worker)[2553]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:35:03.817340 systemd-networkd[1694]: docker0: Link UP Apr 17 23:35:03.844498 dockerd[2531]: time="2026-04-17T23:35:03.844448635Z" level=info msg="Loading containers: done." Apr 17 23:35:03.871903 dockerd[2531]: time="2026-04-17T23:35:03.871183855Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:35:03.871903 dockerd[2531]: time="2026-04-17T23:35:03.871355839Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:35:03.871903 dockerd[2531]: time="2026-04-17T23:35:03.871538899Z" level=info msg="Daemon has completed initialization" Apr 17 23:35:03.931824 dockerd[2531]: time="2026-04-17T23:35:03.931598696Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:35:03.932335 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:35:04.211224 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3279735666-merged.mount: Deactivated successfully. Apr 17 23:35:04.750159 containerd[2145]: time="2026-04-17T23:35:04.749721740Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:35:05.356118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972370164.mount: Deactivated successfully. 
Apr 17 23:35:06.878791 containerd[2145]: time="2026-04-17T23:35:06.878520910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:06.881131 containerd[2145]: time="2026-04-17T23:35:06.880755550Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008787" Apr 17 23:35:06.881131 containerd[2145]: time="2026-04-17T23:35:06.880886686Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:06.887896 containerd[2145]: time="2026-04-17T23:35:06.887153230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:06.890131 containerd[2145]: time="2026-04-17T23:35:06.889656754Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 2.139869854s" Apr 17 23:35:06.890131 containerd[2145]: time="2026-04-17T23:35:06.889725082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 17 23:35:06.891947 containerd[2145]: time="2026-04-17T23:35:06.891655618Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:35:08.536649 containerd[2145]: time="2026-04-17T23:35:08.536570351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:08.538787 containerd[2145]: time="2026-04-17T23:35:08.538259951Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297774" Apr 17 23:35:08.540921 containerd[2145]: time="2026-04-17T23:35:08.540263399Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:08.552842 containerd[2145]: time="2026-04-17T23:35:08.552301355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:08.555122 containerd[2145]: time="2026-04-17T23:35:08.555008315Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.663281669s" Apr 17 23:35:08.555257 containerd[2145]: time="2026-04-17T23:35:08.555136247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 17 23:35:08.556358 containerd[2145]: time="2026-04-17T23:35:08.556303343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 23:35:09.948945 containerd[2145]: time="2026-04-17T23:35:09.948882494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:09.951165 containerd[2145]: 
time="2026-04-17T23:35:09.951107630Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141358" Apr 17 23:35:09.952125 containerd[2145]: time="2026-04-17T23:35:09.951622286Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:09.957795 containerd[2145]: time="2026-04-17T23:35:09.957696482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:09.961320 containerd[2145]: time="2026-04-17T23:35:09.960935150Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.404569179s" Apr 17 23:35:09.961320 containerd[2145]: time="2026-04-17T23:35:09.960997826Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 17 23:35:09.962471 containerd[2145]: time="2026-04-17T23:35:09.962206346Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:35:10.010022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 23:35:10.019458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:10.406464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:35:10.423816 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:35:10.522966 kubelet[2750]: E0417 23:35:10.522877 2750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:35:10.526969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:35:10.527556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:35:11.297649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474262321.mount: Deactivated successfully. Apr 17 23:35:11.893629 containerd[2145]: time="2026-04-17T23:35:11.893558715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:11.895778 containerd[2145]: time="2026-04-17T23:35:11.895705347Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040508" Apr 17 23:35:11.897376 containerd[2145]: time="2026-04-17T23:35:11.897300843Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:11.903407 containerd[2145]: time="2026-04-17T23:35:11.903072675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:11.905394 containerd[2145]: time="2026-04-17T23:35:11.905143347Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id 
\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.942881741s" Apr 17 23:35:11.905394 containerd[2145]: time="2026-04-17T23:35:11.905249043Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 17 23:35:11.908232 containerd[2145]: time="2026-04-17T23:35:11.907312047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:35:12.446723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297373754.mount: Deactivated successfully. Apr 17 23:35:13.559122 containerd[2145]: time="2026-04-17T23:35:13.556986171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:13.560023 containerd[2145]: time="2026-04-17T23:35:13.559963239Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Apr 17 23:35:13.563110 containerd[2145]: time="2026-04-17T23:35:13.563025364Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:13.568644 containerd[2145]: time="2026-04-17T23:35:13.568591096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:13.571285 containerd[2145]: time="2026-04-17T23:35:13.571234360Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.663863921s" Apr 17 23:35:13.571461 containerd[2145]: time="2026-04-17T23:35:13.571432264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 17 23:35:13.572397 containerd[2145]: time="2026-04-17T23:35:13.572250652Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 23:35:14.030494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874958343.mount: Deactivated successfully. Apr 17 23:35:14.038133 containerd[2145]: time="2026-04-17T23:35:14.037918802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:14.039737 containerd[2145]: time="2026-04-17T23:35:14.039680162Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 17 23:35:14.040021 containerd[2145]: time="2026-04-17T23:35:14.039873338Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:14.044050 containerd[2145]: time="2026-04-17T23:35:14.043986902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:14.046530 containerd[2145]: time="2026-04-17T23:35:14.045860642Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 473.556086ms" Apr 17 23:35:14.046530 containerd[2145]: time="2026-04-17T23:35:14.045905750Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 17 23:35:14.046932 containerd[2145]: time="2026-04-17T23:35:14.046894754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:35:14.578889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764163117.mount: Deactivated successfully. Apr 17 23:35:16.393120 containerd[2145]: time="2026-04-17T23:35:16.391331442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:16.394013 containerd[2145]: time="2026-04-17T23:35:16.393932562Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886366" Apr 17 23:35:16.394706 containerd[2145]: time="2026-04-17T23:35:16.394644078Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:16.403462 containerd[2145]: time="2026-04-17T23:35:16.403379994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:16.406334 containerd[2145]: time="2026-04-17T23:35:16.406280202Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.359226052s" Apr 17 
23:35:16.406512 containerd[2145]: time="2026-04-17T23:35:16.406482246Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 17 23:35:16.657887 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 17 23:35:20.759621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 17 23:35:20.769738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:21.149582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:21.166723 (kubelet)[2922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:35:21.252142 kubelet[2922]: E0417 23:35:21.249945 2922 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:35:21.254952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:35:21.259572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:35:23.427681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:23.437679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:23.499638 systemd[1]: Reloading requested from client PID 2939 ('systemctl') (unit session-7.scope)... Apr 17 23:35:23.499671 systemd[1]: Reloading... Apr 17 23:35:23.740118 zram_generator::config[2979]: No configuration found. 
Apr 17 23:35:24.036597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:35:24.210942 systemd[1]: Reloading finished in 710 ms. Apr 17 23:35:24.290867 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 23:35:24.291249 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 23:35:24.292037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:24.302949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:24.649444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:24.670274 (kubelet)[3052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:35:24.754807 kubelet[3052]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:35:24.754807 kubelet[3052]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:35:24.754807 kubelet[3052]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:35:24.755800 kubelet[3052]: I0417 23:35:24.754839 3052 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:35:25.314683 kubelet[3052]: I0417 23:35:25.314607 3052 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:35:25.314683 kubelet[3052]: I0417 23:35:25.314665 3052 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:35:25.315198 kubelet[3052]: I0417 23:35:25.315043 3052 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:35:25.369138 kubelet[3052]: E0417 23:35:25.366911 3052 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.247:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.247:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:35:25.370347 kubelet[3052]: I0417 23:35:25.370289 3052 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:35:25.386384 kubelet[3052]: E0417 23:35:25.386333 3052 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:35:25.386652 kubelet[3052]: I0417 23:35:25.386605 3052 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:35:25.394352 kubelet[3052]: I0417 23:35:25.394299 3052 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:35:25.397572 kubelet[3052]: I0417 23:35:25.397513 3052 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:35:25.398018 kubelet[3052]: I0417 23:35:25.397746 3052 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-247","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 17 23:35:25.398292 kubelet[3052]: I0417 23:35:25.398271 3052 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:35:25.398389 kubelet[3052]: I0417 23:35:25.398371 3052 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:35:25.398844 kubelet[3052]: I0417 23:35:25.398822 3052 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:35:25.405948 kubelet[3052]: I0417 23:35:25.405908 3052 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:35:25.406202 kubelet[3052]: I0417 23:35:25.406177 3052 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:35:25.406325 kubelet[3052]: I0417 23:35:25.406308 3052 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:35:25.406434 kubelet[3052]: I0417 23:35:25.406416 3052 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:35:25.410596 kubelet[3052]: E0417 23:35:25.410519 3052 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-247&limit=500&resourceVersion=0\": dial tcp 172.31.31.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:35:25.412804 kubelet[3052]: E0417 23:35:25.412738 3052 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:35:25.412972 kubelet[3052]: I0417 23:35:25.412943 3052 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:35:25.414150 kubelet[3052]: I0417 23:35:25.414067 3052 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 
23:35:25.414408 kubelet[3052]: W0417 23:35:25.414367 3052 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:35:25.422108 kubelet[3052]: I0417 23:35:25.422028 3052 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:35:25.422224 kubelet[3052]: I0417 23:35:25.422130 3052 server.go:1289] "Started kubelet" Apr 17 23:35:25.426668 kubelet[3052]: I0417 23:35:25.426615 3052 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:35:25.428063 kubelet[3052]: I0417 23:35:25.427963 3052 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:35:25.428633 kubelet[3052]: I0417 23:35:25.428584 3052 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:35:25.428780 kubelet[3052]: I0417 23:35:25.428756 3052 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:35:25.433390 kubelet[3052]: I0417 23:35:25.433351 3052 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:35:25.443882 kubelet[3052]: I0417 23:35:25.443822 3052 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:35:25.450180 kubelet[3052]: E0417 23:35:25.433850 3052 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.247:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.247:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-247.18a74913564d4d26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-247,UID:ip-172-31-31-247,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-247,},FirstTimestamp:2026-04-17 
23:35:25.422062886 +0000 UTC m=+0.742628872,LastTimestamp:2026-04-17 23:35:25.422062886 +0000 UTC m=+0.742628872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-247,}" Apr 17 23:35:25.451059 kubelet[3052]: I0417 23:35:25.450477 3052 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:35:25.452407 kubelet[3052]: E0417 23:35:25.452360 3052 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-247\" not found" Apr 17 23:35:25.457704 kubelet[3052]: I0417 23:35:25.457650 3052 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:35:25.457994 kubelet[3052]: I0417 23:35:25.457972 3052 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:35:25.459384 kubelet[3052]: E0417 23:35:25.459337 3052 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:35:25.459725 kubelet[3052]: E0417 23:35:25.459681 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-247?timeout=10s\": dial tcp 172.31.31.247:6443: connect: connection refused" interval="200ms" Apr 17 23:35:25.460159 kubelet[3052]: I0417 23:35:25.460079 3052 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:35:25.460426 kubelet[3052]: I0417 23:35:25.460395 3052 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:35:25.463921 kubelet[3052]: I0417 
23:35:25.463883 3052 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:35:25.486148 kubelet[3052]: I0417 23:35:25.485606 3052 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:35:25.503823 kubelet[3052]: I0417 23:35:25.503760 3052 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:35:25.504124 kubelet[3052]: I0417 23:35:25.504025 3052 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:35:25.504398 kubelet[3052]: I0417 23:35:25.504266 3052 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:35:25.504398 kubelet[3052]: I0417 23:35:25.504296 3052 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:35:25.504832 kubelet[3052]: E0417 23:35:25.504656 3052 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:35:25.509377 kubelet[3052]: E0417 23:35:25.509263 3052 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:35:25.524890 kubelet[3052]: I0417 23:35:25.524482 3052 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:35:25.524890 kubelet[3052]: I0417 23:35:25.524513 3052 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:35:25.524890 kubelet[3052]: I0417 23:35:25.524547 3052 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:35:25.527449 kubelet[3052]: I0417 23:35:25.527413 3052 policy_none.go:49] "None policy: Start" Apr 17 23:35:25.527631 kubelet[3052]: I0417 23:35:25.527609 3052 
memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:35:25.527736 kubelet[3052]: I0417 23:35:25.527719 3052 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:35:25.539138 kubelet[3052]: E0417 23:35:25.537136 3052 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:35:25.539138 kubelet[3052]: I0417 23:35:25.537449 3052 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:35:25.539138 kubelet[3052]: I0417 23:35:25.537487 3052 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:35:25.541451 kubelet[3052]: I0417 23:35:25.541418 3052 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:35:25.547478 kubelet[3052]: E0417 23:35:25.547441 3052 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:35:25.547704 kubelet[3052]: E0417 23:35:25.547679 3052 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-247\" not found" Apr 17 23:35:25.619982 kubelet[3052]: E0417 23:35:25.619851 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:25.628488 kubelet[3052]: E0417 23:35:25.628447 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:25.633898 kubelet[3052]: E0417 23:35:25.633851 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:25.642780 kubelet[3052]: I0417 23:35:25.642743 3052 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-247" Apr 17 23:35:25.643585 kubelet[3052]: E0417 23:35:25.643498 3052 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.247:6443/api/v1/nodes\": dial tcp 172.31.31.247:6443: connect: connection refused" node="ip-172-31-31-247" Apr 17 23:35:25.659662 kubelet[3052]: I0417 23:35:25.659347 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: \"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:25.659662 kubelet[3052]: I0417 23:35:25.659410 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: \"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:25.659662 kubelet[3052]: I0417 23:35:25.659457 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: \"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:25.659662 kubelet[3052]: I0417 23:35:25.659496 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/774e69a71cad02a917c49adf4e4837cf-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-247\" (UID: \"774e69a71cad02a917c49adf4e4837cf\") " 
pod="kube-system/kube-scheduler-ip-172-31-31-247" Apr 17 23:35:25.659662 kubelet[3052]: I0417 23:35:25.659534 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2520430b5e31eb79e9b11ca7a09f22f-ca-certs\") pod \"kube-apiserver-ip-172-31-31-247\" (UID: \"c2520430b5e31eb79e9b11ca7a09f22f\") " pod="kube-system/kube-apiserver-ip-172-31-31-247" Apr 17 23:35:25.660008 kubelet[3052]: I0417 23:35:25.659568 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: \"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:25.660008 kubelet[3052]: I0417 23:35:25.659602 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2520430b5e31eb79e9b11ca7a09f22f-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-247\" (UID: \"c2520430b5e31eb79e9b11ca7a09f22f\") " pod="kube-system/kube-apiserver-ip-172-31-31-247" Apr 17 23:35:25.660008 kubelet[3052]: I0417 23:35:25.659660 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2520430b5e31eb79e9b11ca7a09f22f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-247\" (UID: \"c2520430b5e31eb79e9b11ca7a09f22f\") " pod="kube-system/kube-apiserver-ip-172-31-31-247" Apr 17 23:35:25.660008 kubelet[3052]: I0417 23:35:25.659713 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: 
\"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:25.660725 kubelet[3052]: E0417 23:35:25.660675 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-247?timeout=10s\": dial tcp 172.31.31.247:6443: connect: connection refused" interval="400ms" Apr 17 23:35:25.846797 kubelet[3052]: I0417 23:35:25.846696 3052 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-247" Apr 17 23:35:25.847658 kubelet[3052]: E0417 23:35:25.847226 3052 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.247:6443/api/v1/nodes\": dial tcp 172.31.31.247:6443: connect: connection refused" node="ip-172-31-31-247" Apr 17 23:35:25.926746 containerd[2145]: time="2026-04-17T23:35:25.926160629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-247,Uid:c2520430b5e31eb79e9b11ca7a09f22f,Namespace:kube-system,Attempt:0,}" Apr 17 23:35:25.931739 containerd[2145]: time="2026-04-17T23:35:25.930824525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-247,Uid:7b4ff9d2a54c79aed4b1e1bc862590ed,Namespace:kube-system,Attempt:0,}" Apr 17 23:35:25.935461 containerd[2145]: time="2026-04-17T23:35:25.935398637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-247,Uid:774e69a71cad02a917c49adf4e4837cf,Namespace:kube-system,Attempt:0,}" Apr 17 23:35:26.061663 kubelet[3052]: E0417 23:35:26.061608 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-247?timeout=10s\": dial tcp 172.31.31.247:6443: connect: connection refused" interval="800ms" Apr 17 23:35:26.250376 kubelet[3052]: I0417 23:35:26.250157 3052 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-247" Apr 17 23:35:26.251481 kubelet[3052]: E0417 23:35:26.251430 3052 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.247:6443/api/v1/nodes\": dial tcp 172.31.31.247:6443: connect: connection refused" node="ip-172-31-31-247" Apr 17 23:35:26.409340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593129972.mount: Deactivated successfully. Apr 17 23:35:26.417302 containerd[2145]: time="2026-04-17T23:35:26.417216963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:35:26.420442 containerd[2145]: time="2026-04-17T23:35:26.420372951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 17 23:35:26.426127 containerd[2145]: time="2026-04-17T23:35:26.424696767Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:35:26.427448 containerd[2145]: time="2026-04-17T23:35:26.427364955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:35:26.427934 containerd[2145]: time="2026-04-17T23:35:26.427894347Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:35:26.431592 containerd[2145]: time="2026-04-17T23:35:26.431543667Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:35:26.433808 containerd[2145]: time="2026-04-17T23:35:26.433757043Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:35:26.442367 containerd[2145]: time="2026-04-17T23:35:26.442306707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 516.014786ms" Apr 17 23:35:26.444295 containerd[2145]: time="2026-04-17T23:35:26.444229863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:35:26.447718 containerd[2145]: time="2026-04-17T23:35:26.447652852Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 516.710247ms" Apr 17 23:35:26.450138 containerd[2145]: time="2026-04-17T23:35:26.449037088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.520311ms" Apr 17 23:35:26.548509 kubelet[3052]: E0417 23:35:26.548453 3052 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.247:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:35:26.663505 containerd[2145]: time="2026-04-17T23:35:26.662890193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:35:26.663709 containerd[2145]: time="2026-04-17T23:35:26.663633269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:35:26.664031 containerd[2145]: time="2026-04-17T23:35:26.663834005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:26.665374 containerd[2145]: time="2026-04-17T23:35:26.665138453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:26.671619 containerd[2145]: time="2026-04-17T23:35:26.671449577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:35:26.672270 containerd[2145]: time="2026-04-17T23:35:26.671560157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:35:26.673252 containerd[2145]: time="2026-04-17T23:35:26.672233801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:26.673724 containerd[2145]: time="2026-04-17T23:35:26.673559057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:26.682924 containerd[2145]: time="2026-04-17T23:35:26.682379033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:35:26.682924 containerd[2145]: time="2026-04-17T23:35:26.682507049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:35:26.682924 containerd[2145]: time="2026-04-17T23:35:26.682559393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:26.683372 containerd[2145]: time="2026-04-17T23:35:26.682919969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:26.773082 kubelet[3052]: E0417 23:35:26.773016 3052 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:35:26.826622 kubelet[3052]: E0417 23:35:26.826413 3052 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-247&limit=500&resourceVersion=0\": dial tcp 172.31.31.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:35:26.829803 containerd[2145]: time="2026-04-17T23:35:26.829709489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-247,Uid:774e69a71cad02a917c49adf4e4837cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"05809f66e18eee642700d83226592e563032b5d0e92bec312bda6fa9549e9d3b\"" Apr 17 23:35:26.842390 containerd[2145]: time="2026-04-17T23:35:26.842300165Z" level=info msg="CreateContainer within sandbox 
\"05809f66e18eee642700d83226592e563032b5d0e92bec312bda6fa9549e9d3b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:35:26.862589 kubelet[3052]: E0417 23:35:26.862191 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-247?timeout=10s\": dial tcp 172.31.31.247:6443: connect: connection refused" interval="1.6s" Apr 17 23:35:26.876755 containerd[2145]: time="2026-04-17T23:35:26.876670374Z" level=info msg="CreateContainer within sandbox \"05809f66e18eee642700d83226592e563032b5d0e92bec312bda6fa9549e9d3b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f477d52e87bb8761609bf71216f2313bcb3986c5b57b18ad7de9108042dbcbd\"" Apr 17 23:35:26.879548 containerd[2145]: time="2026-04-17T23:35:26.879486774Z" level=info msg="StartContainer for \"6f477d52e87bb8761609bf71216f2313bcb3986c5b57b18ad7de9108042dbcbd\"" Apr 17 23:35:26.882009 containerd[2145]: time="2026-04-17T23:35:26.881797410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-247,Uid:7b4ff9d2a54c79aed4b1e1bc862590ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"23a4fb56bbfe11ad7ea282986febff40f1d88be2c31df8a445a44aa553578837\"" Apr 17 23:35:26.892726 containerd[2145]: time="2026-04-17T23:35:26.892291998Z" level=info msg="CreateContainer within sandbox \"23a4fb56bbfe11ad7ea282986febff40f1d88be2c31df8a445a44aa553578837\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:35:26.892726 containerd[2145]: time="2026-04-17T23:35:26.892586310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-247,Uid:c2520430b5e31eb79e9b11ca7a09f22f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bc39ac31d36626a4ced3ee40e17e3b9ce1f474ff25b6d6dd1907eea3f5e3deb\"" Apr 17 23:35:26.904912 containerd[2145]: 
time="2026-04-17T23:35:26.904841406Z" level=info msg="CreateContainer within sandbox \"9bc39ac31d36626a4ced3ee40e17e3b9ce1f474ff25b6d6dd1907eea3f5e3deb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:35:26.924614 containerd[2145]: time="2026-04-17T23:35:26.924031230Z" level=info msg="CreateContainer within sandbox \"23a4fb56bbfe11ad7ea282986febff40f1d88be2c31df8a445a44aa553578837\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c40bc6a700a3c184516cdf70b5e81008ae423025705dca1e374a3e262367eca7\"" Apr 17 23:35:26.925267 containerd[2145]: time="2026-04-17T23:35:26.925012062Z" level=info msg="StartContainer for \"c40bc6a700a3c184516cdf70b5e81008ae423025705dca1e374a3e262367eca7\"" Apr 17 23:35:26.930587 containerd[2145]: time="2026-04-17T23:35:26.930498270Z" level=info msg="CreateContainer within sandbox \"9bc39ac31d36626a4ced3ee40e17e3b9ce1f474ff25b6d6dd1907eea3f5e3deb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"889beee3a8391253bb5a0de4608e051a4bffb5c477bc7c19258e8e702052531f\"" Apr 17 23:35:26.934884 containerd[2145]: time="2026-04-17T23:35:26.934496490Z" level=info msg="StartContainer for \"889beee3a8391253bb5a0de4608e051a4bffb5c477bc7c19258e8e702052531f\"" Apr 17 23:35:27.055138 containerd[2145]: time="2026-04-17T23:35:27.052764651Z" level=info msg="StartContainer for \"6f477d52e87bb8761609bf71216f2313bcb3986c5b57b18ad7de9108042dbcbd\" returns successfully" Apr 17 23:35:27.066132 kubelet[3052]: I0417 23:35:27.062184 3052 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-247" Apr 17 23:35:27.066132 kubelet[3052]: E0417 23:35:27.062674 3052 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.247:6443/api/v1/nodes\": dial tcp 172.31.31.247:6443: connect: connection refused" node="ip-172-31-31-247" Apr 17 23:35:27.066584 kubelet[3052]: E0417 23:35:27.066494 3052 reflector.go:200] "Failed to 
watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:35:27.155019 containerd[2145]: time="2026-04-17T23:35:27.153469155Z" level=info msg="StartContainer for \"c40bc6a700a3c184516cdf70b5e81008ae423025705dca1e374a3e262367eca7\" returns successfully" Apr 17 23:35:27.210394 containerd[2145]: time="2026-04-17T23:35:27.210314079Z" level=info msg="StartContainer for \"889beee3a8391253bb5a0de4608e051a4bffb5c477bc7c19258e8e702052531f\" returns successfully" Apr 17 23:35:27.538344 kubelet[3052]: E0417 23:35:27.538285 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:27.554799 kubelet[3052]: E0417 23:35:27.552589 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:27.569361 kubelet[3052]: E0417 23:35:27.568766 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:28.573590 kubelet[3052]: E0417 23:35:28.572215 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:28.573590 kubelet[3052]: E0417 23:35:28.572638 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:28.576745 kubelet[3052]: E0417 23:35:28.576695 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:28.669190 kubelet[3052]: I0417 23:35:28.667508 3052 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-247" Apr 17 23:35:29.574339 kubelet[3052]: E0417 23:35:29.574278 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:29.574965 kubelet[3052]: E0417 23:35:29.574915 3052 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:30.211225 update_engine[2128]: I20260417 23:35:30.211134 2128 update_attempter.cc:509] Updating boot flags... Apr 17 23:35:30.421116 kubelet[3052]: I0417 23:35:30.413422 3052 apiserver.go:52] "Watching apiserver" Apr 17 23:35:30.484125 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3350) Apr 17 23:35:30.558286 kubelet[3052]: I0417 23:35:30.558243 3052 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:35:30.578995 kubelet[3052]: E0417 23:35:30.578945 3052 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-247\" not found" node="ip-172-31-31-247" Apr 17 23:35:30.759169 kubelet[3052]: I0417 23:35:30.757012 3052 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-247" Apr 17 23:35:30.858545 kubelet[3052]: I0417 23:35:30.858210 3052 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-247" Apr 17 23:35:30.929204 kubelet[3052]: E0417 23:35:30.926230 3052 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-247\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ip-172-31-31-247" Apr 17 23:35:30.929204 kubelet[3052]: I0417 23:35:30.926285 3052 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:30.955122 kubelet[3052]: E0417 23:35:30.942452 3052 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:30.955122 kubelet[3052]: I0417 23:35:30.946720 3052 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-247" Apr 17 23:35:30.957477 kubelet[3052]: E0417 23:35:30.957402 3052 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-247" Apr 17 23:35:31.193138 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3349) Apr 17 23:35:31.699154 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3349) Apr 17 23:35:33.160269 systemd[1]: Reloading requested from client PID 3605 ('systemctl') (unit session-7.scope)... Apr 17 23:35:33.160301 systemd[1]: Reloading... Apr 17 23:35:33.335199 zram_generator::config[3648]: No configuration found. Apr 17 23:35:33.589582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:35:33.784585 systemd[1]: Reloading finished in 623 ms. Apr 17 23:35:33.846940 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:33.862764 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 17 23:35:33.863552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:33.875031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:34.415424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:34.434869 (kubelet)[3715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:35:34.550813 kubelet[3715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:35:34.550813 kubelet[3715]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:35:34.552455 kubelet[3715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:35:34.552455 kubelet[3715]: I0417 23:35:34.551438 3715 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:35:34.567179 kubelet[3715]: I0417 23:35:34.565032 3715 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:35:34.567179 kubelet[3715]: I0417 23:35:34.565076 3715 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:35:34.567179 kubelet[3715]: I0417 23:35:34.565543 3715 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:35:34.568332 kubelet[3715]: I0417 23:35:34.568295 3715 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:35:34.581602 kubelet[3715]: I0417 23:35:34.581530 3715 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:35:34.589339 kubelet[3715]: E0417 23:35:34.589269 3715 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:35:34.589339 kubelet[3715]: I0417 23:35:34.589325 3715 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:35:34.595746 kubelet[3715]: I0417 23:35:34.595694 3715 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:35:34.597637 kubelet[3715]: I0417 23:35:34.596724 3715 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:35:34.597637 kubelet[3715]: I0417 23:35:34.596792 3715 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-247","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 17 23:35:34.597637 kubelet[3715]: I0417 23:35:34.597060 3715 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:35:34.597637 kubelet[3715]: I0417 23:35:34.597118 3715 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:35:34.597637 kubelet[3715]: I0417 23:35:34.597242 3715 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:35:34.598067 kubelet[3715]: I0417 23:35:34.597542 3715 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:35:34.601247 kubelet[3715]: I0417 23:35:34.601196 3715 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:35:34.601386 kubelet[3715]: I0417 23:35:34.601291 3715 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:35:34.607801 kubelet[3715]: I0417 23:35:34.607755 3715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:35:34.617914 kubelet[3715]: I0417 23:35:34.616322 3715 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:35:34.617914 kubelet[3715]: I0417 23:35:34.617276 3715 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:35:34.621170 kubelet[3715]: I0417 23:35:34.621130 3715 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:35:34.621314 kubelet[3715]: I0417 23:35:34.621198 3715 server.go:1289] "Started kubelet" Apr 17 23:35:34.629169 kubelet[3715]: I0417 23:35:34.628230 3715 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:35:34.632005 kubelet[3715]: I0417 23:35:34.631049 3715 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:35:34.634915 kubelet[3715]: I0417 23:35:34.634846 3715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:35:34.640174 kubelet[3715]: I0417 23:35:34.639662 3715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:35:34.640174 kubelet[3715]: I0417 23:35:34.640005 3715 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:35:34.651474 kubelet[3715]: I0417 23:35:34.650736 3715 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:35:34.663549 kubelet[3715]: I0417 23:35:34.663190 3715 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:35:34.663676 kubelet[3715]: E0417 23:35:34.663643 3715 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-247\" not found" Apr 17 23:35:34.670151 kubelet[3715]: I0417 23:35:34.668999 3715 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:35:34.680473 kubelet[3715]: I0417 23:35:34.680412 3715 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:35:34.688333 kubelet[3715]: I0417 23:35:34.688035 3715 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:35:34.698634 kubelet[3715]: I0417 23:35:34.698592 3715 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:35:34.699676 kubelet[3715]: I0417 23:35:34.698996 3715 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:35:34.699676 kubelet[3715]: I0417 23:35:34.699048 3715 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:35:34.699676 kubelet[3715]: I0417 23:35:34.699064 3715 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:35:34.699676 kubelet[3715]: I0417 23:35:34.699054 3715 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:35:34.699676 kubelet[3715]: E0417 23:35:34.699166 3715 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:35:34.714873 kubelet[3715]: I0417 23:35:34.714355 3715 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:35:34.715059 kubelet[3715]: I0417 23:35:34.715033 3715 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:35:34.802228 kubelet[3715]: E0417 23:35:34.801281 3715 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 23:35:34.989620 kubelet[3715]: I0417 23:35:34.989406 3715 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:35:34.990983 kubelet[3715]: I0417 23:35:34.989895 3715 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:35:34.990983 kubelet[3715]: I0417 23:35:34.989972 3715 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:35:34.990983 kubelet[3715]: I0417 23:35:34.990812 3715 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:35:34.990983 kubelet[3715]: I0417 23:35:34.990839 3715 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:35:34.990983 kubelet[3715]: I0417 23:35:34.990872 3715 policy_none.go:49] "None policy: Start" Apr 17 23:35:34.990983 kubelet[3715]: I0417 23:35:34.990893 3715 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:35:34.990983 kubelet[3715]: I0417 23:35:34.990918 3715 state_mem.go:35] "Initializing new 
in-memory state store" Apr 17 23:35:34.991421 kubelet[3715]: I0417 23:35:34.991121 3715 state_mem.go:75] "Updated machine memory state" Apr 17 23:35:34.994566 kubelet[3715]: E0417 23:35:34.994526 3715 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:35:34.996125 kubelet[3715]: I0417 23:35:34.994977 3715 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:35:34.996125 kubelet[3715]: I0417 23:35:34.995002 3715 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:35:34.997145 kubelet[3715]: I0417 23:35:34.996927 3715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:35:35.002352 kubelet[3715]: E0417 23:35:35.002043 3715 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:35:35.005792 kubelet[3715]: I0417 23:35:35.005442 3715 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-247" Apr 17 23:35:35.013637 kubelet[3715]: I0417 23:35:35.013594 3715 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:35.017356 kubelet[3715]: I0417 23:35:35.016325 3715 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-247" Apr 17 23:35:35.086400 kubelet[3715]: I0417 23:35:35.086347 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: \"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:35.086679 kubelet[3715]: I0417 23:35:35.086648 3715 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: \"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:35.086898 kubelet[3715]: I0417 23:35:35.086875 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/774e69a71cad02a917c49adf4e4837cf-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-247\" (UID: \"774e69a71cad02a917c49adf4e4837cf\") " pod="kube-system/kube-scheduler-ip-172-31-31-247" Apr 17 23:35:35.087124 kubelet[3715]: I0417 23:35:35.087046 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2520430b5e31eb79e9b11ca7a09f22f-ca-certs\") pod \"kube-apiserver-ip-172-31-31-247\" (UID: \"c2520430b5e31eb79e9b11ca7a09f22f\") " pod="kube-system/kube-apiserver-ip-172-31-31-247" Apr 17 23:35:35.087334 kubelet[3715]: I0417 23:35:35.087247 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2520430b5e31eb79e9b11ca7a09f22f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-247\" (UID: \"c2520430b5e31eb79e9b11ca7a09f22f\") " pod="kube-system/kube-apiserver-ip-172-31-31-247" Apr 17 23:35:35.087562 kubelet[3715]: I0417 23:35:35.087424 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: \"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:35.087562 
kubelet[3715]: I0417 23:35:35.087507 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: \"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:35.087860 kubelet[3715]: I0417 23:35:35.087714 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b4ff9d2a54c79aed4b1e1bc862590ed-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-247\" (UID: \"7b4ff9d2a54c79aed4b1e1bc862590ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-247" Apr 17 23:35:35.087860 kubelet[3715]: I0417 23:35:35.087804 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2520430b5e31eb79e9b11ca7a09f22f-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-247\" (UID: \"c2520430b5e31eb79e9b11ca7a09f22f\") " pod="kube-system/kube-apiserver-ip-172-31-31-247" Apr 17 23:35:35.128178 kubelet[3715]: I0417 23:35:35.126847 3715 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-247" Apr 17 23:35:35.143534 kubelet[3715]: I0417 23:35:35.143489 3715 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-247" Apr 17 23:35:35.143841 kubelet[3715]: I0417 23:35:35.143821 3715 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-247" Apr 17 23:35:35.609153 kubelet[3715]: I0417 23:35:35.609070 3715 apiserver.go:52] "Watching apiserver" Apr 17 23:35:35.669596 kubelet[3715]: I0417 23:35:35.669542 3715 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:35:35.839994 kubelet[3715]: I0417 23:35:35.838450 3715 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-247" podStartSLOduration=0.838432622 podStartE2EDuration="838.432622ms" podCreationTimestamp="2026-04-17 23:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:35:35.837979694 +0000 UTC m=+1.387771988" watchObservedRunningTime="2026-04-17 23:35:35.838432622 +0000 UTC m=+1.388224916" Apr 17 23:35:35.874030 kubelet[3715]: I0417 23:35:35.873773 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-247" podStartSLOduration=0.873749978 podStartE2EDuration="873.749978ms" podCreationTimestamp="2026-04-17 23:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:35:35.853773554 +0000 UTC m=+1.403565836" watchObservedRunningTime="2026-04-17 23:35:35.873749978 +0000 UTC m=+1.423542260" Apr 17 23:35:35.894794 kubelet[3715]: I0417 23:35:35.894697 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-247" podStartSLOduration=0.894675818 podStartE2EDuration="894.675818ms" podCreationTimestamp="2026-04-17 23:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:35:35.874154834 +0000 UTC m=+1.423947128" watchObservedRunningTime="2026-04-17 23:35:35.894675818 +0000 UTC m=+1.444468100" Apr 17 23:35:39.553993 kubelet[3715]: I0417 23:35:39.553484 3715 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:35:39.556765 containerd[2145]: time="2026-04-17T23:35:39.556687529Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 17 23:35:39.557370 kubelet[3715]: I0417 23:35:39.557215 3715 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:35:40.318447 kubelet[3715]: I0417 23:35:40.318255 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a52e9757-46d7-4158-a322-d46bf2d4b289-lib-modules\") pod \"kube-proxy-h5ttk\" (UID: \"a52e9757-46d7-4158-a322-d46bf2d4b289\") " pod="kube-system/kube-proxy-h5ttk" Apr 17 23:35:40.318447 kubelet[3715]: I0417 23:35:40.318368 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a52e9757-46d7-4158-a322-d46bf2d4b289-kube-proxy\") pod \"kube-proxy-h5ttk\" (UID: \"a52e9757-46d7-4158-a322-d46bf2d4b289\") " pod="kube-system/kube-proxy-h5ttk" Apr 17 23:35:40.318447 kubelet[3715]: I0417 23:35:40.318414 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a52e9757-46d7-4158-a322-d46bf2d4b289-xtables-lock\") pod \"kube-proxy-h5ttk\" (UID: \"a52e9757-46d7-4158-a322-d46bf2d4b289\") " pod="kube-system/kube-proxy-h5ttk" Apr 17 23:35:40.318747 kubelet[3715]: I0417 23:35:40.318479 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwhz6\" (UniqueName: \"kubernetes.io/projected/a52e9757-46d7-4158-a322-d46bf2d4b289-kube-api-access-dwhz6\") pod \"kube-proxy-h5ttk\" (UID: \"a52e9757-46d7-4158-a322-d46bf2d4b289\") " pod="kube-system/kube-proxy-h5ttk" Apr 17 23:35:40.434758 kubelet[3715]: E0417 23:35:40.434699 3715 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 17 23:35:40.434758 kubelet[3715]: E0417 23:35:40.434754 3715 projected.go:194] Error preparing data for projected volume 
kube-api-access-dwhz6 for pod kube-system/kube-proxy-h5ttk: configmap "kube-root-ca.crt" not found Apr 17 23:35:40.435055 kubelet[3715]: E0417 23:35:40.434862 3715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a52e9757-46d7-4158-a322-d46bf2d4b289-kube-api-access-dwhz6 podName:a52e9757-46d7-4158-a322-d46bf2d4b289 nodeName:}" failed. No retries permitted until 2026-04-17 23:35:40.934827529 +0000 UTC m=+6.484619811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dwhz6" (UniqueName: "kubernetes.io/projected/a52e9757-46d7-4158-a322-d46bf2d4b289-kube-api-access-dwhz6") pod "kube-proxy-h5ttk" (UID: "a52e9757-46d7-4158-a322-d46bf2d4b289") : configmap "kube-root-ca.crt" not found Apr 17 23:35:40.822162 kubelet[3715]: I0417 23:35:40.822081 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxkhj\" (UniqueName: \"kubernetes.io/projected/f538a66a-6882-41ac-ad4e-df10301d05bf-kube-api-access-zxkhj\") pod \"tigera-operator-6bf85f8dd-lml4w\" (UID: \"f538a66a-6882-41ac-ad4e-df10301d05bf\") " pod="tigera-operator/tigera-operator-6bf85f8dd-lml4w" Apr 17 23:35:40.822784 kubelet[3715]: I0417 23:35:40.822171 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f538a66a-6882-41ac-ad4e-df10301d05bf-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-lml4w\" (UID: \"f538a66a-6882-41ac-ad4e-df10301d05bf\") " pod="tigera-operator/tigera-operator-6bf85f8dd-lml4w" Apr 17 23:35:41.112294 containerd[2145]: time="2026-04-17T23:35:41.112131484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-lml4w,Uid:f538a66a-6882-41ac-ad4e-df10301d05bf,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:35:41.160595 containerd[2145]: time="2026-04-17T23:35:41.160301849Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:35:41.160595 containerd[2145]: time="2026-04-17T23:35:41.160480313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:35:41.161074 containerd[2145]: time="2026-04-17T23:35:41.160552985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:41.161236 containerd[2145]: time="2026-04-17T23:35:41.161044913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:41.204138 containerd[2145]: time="2026-04-17T23:35:41.204017249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h5ttk,Uid:a52e9757-46d7-4158-a322-d46bf2d4b289,Namespace:kube-system,Attempt:0,}" Apr 17 23:35:41.260082 containerd[2145]: time="2026-04-17T23:35:41.259544453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:35:41.260082 containerd[2145]: time="2026-04-17T23:35:41.259664321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:35:41.260082 containerd[2145]: time="2026-04-17T23:35:41.259699601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:41.260082 containerd[2145]: time="2026-04-17T23:35:41.259885169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:41.273442 containerd[2145]: time="2026-04-17T23:35:41.273251633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-lml4w,Uid:f538a66a-6882-41ac-ad4e-df10301d05bf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a7959f8454fff2e88890f014648d4387a6d48b9b0176eef86518d9a05198bce8\"" Apr 17 23:35:41.279134 containerd[2145]: time="2026-04-17T23:35:41.277851449Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:35:41.336139 containerd[2145]: time="2026-04-17T23:35:41.336030629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h5ttk,Uid:a52e9757-46d7-4158-a322-d46bf2d4b289,Namespace:kube-system,Attempt:0,} returns sandbox id \"e79d2f16eb53d61cc60e7db6f8b55974aa544ab8f4dee7f0b8dcef8305a5ba46\"" Apr 17 23:35:41.346758 containerd[2145]: time="2026-04-17T23:35:41.345117258Z" level=info msg="CreateContainer within sandbox \"e79d2f16eb53d61cc60e7db6f8b55974aa544ab8f4dee7f0b8dcef8305a5ba46\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:35:41.366137 containerd[2145]: time="2026-04-17T23:35:41.365175522Z" level=info msg="CreateContainer within sandbox \"e79d2f16eb53d61cc60e7db6f8b55974aa544ab8f4dee7f0b8dcef8305a5ba46\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a16d5959748485969eeef07e43bfa6b88a1b1b382c419c6c642c2bf3c82b8442\"" Apr 17 23:35:41.367520 containerd[2145]: time="2026-04-17T23:35:41.367454346Z" level=info msg="StartContainer for \"a16d5959748485969eeef07e43bfa6b88a1b1b382c419c6c642c2bf3c82b8442\"" Apr 17 23:35:41.472655 containerd[2145]: time="2026-04-17T23:35:41.472510722Z" level=info msg="StartContainer for \"a16d5959748485969eeef07e43bfa6b88a1b1b382c419c6c642c2bf3c82b8442\" returns successfully" Apr 17 23:35:42.487125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1839483806.mount: Deactivated successfully. 
Apr 17 23:35:44.720315 kubelet[3715]: I0417 23:35:44.720154 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h5ttk" podStartSLOduration=4.720131806 podStartE2EDuration="4.720131806s" podCreationTimestamp="2026-04-17 23:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:35:41.848919944 +0000 UTC m=+7.398712250" watchObservedRunningTime="2026-04-17 23:35:44.720131806 +0000 UTC m=+10.269924100" Apr 17 23:35:47.966208 containerd[2145]: time="2026-04-17T23:35:47.965579354Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:47.968231 containerd[2145]: time="2026-04-17T23:35:47.968157506Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 17 23:35:47.969137 containerd[2145]: time="2026-04-17T23:35:47.968811410Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:47.975140 containerd[2145]: time="2026-04-17T23:35:47.974708822Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:47.977816 containerd[2145]: time="2026-04-17T23:35:47.976581902Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 6.698664621s" Apr 17 23:35:47.977816 containerd[2145]: time="2026-04-17T23:35:47.976641542Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 17 23:35:47.992436 containerd[2145]: time="2026-04-17T23:35:47.992381499Z" level=info msg="CreateContainer within sandbox \"a7959f8454fff2e88890f014648d4387a6d48b9b0176eef86518d9a05198bce8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:35:48.013305 containerd[2145]: time="2026-04-17T23:35:48.013230455Z" level=info msg="CreateContainer within sandbox \"a7959f8454fff2e88890f014648d4387a6d48b9b0176eef86518d9a05198bce8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1\"" Apr 17 23:35:48.016107 containerd[2145]: time="2026-04-17T23:35:48.014343587Z" level=info msg="StartContainer for \"d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1\"" Apr 17 23:35:48.116206 containerd[2145]: time="2026-04-17T23:35:48.115963811Z" level=info msg="StartContainer for \"d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1\" returns successfully" Apr 17 23:35:48.875660 kubelet[3715]: I0417 23:35:48.875044 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-lml4w" podStartSLOduration=2.173827686 podStartE2EDuration="8.874999503s" podCreationTimestamp="2026-04-17 23:35:40 +0000 UTC" firstStartedPulling="2026-04-17 23:35:41.276961529 +0000 UTC m=+6.826753811" lastFinishedPulling="2026-04-17 23:35:47.978133358 +0000 UTC m=+13.527925628" observedRunningTime="2026-04-17 23:35:48.874842915 +0000 UTC m=+14.424635197" watchObservedRunningTime="2026-04-17 23:35:48.874999503 +0000 UTC m=+14.424791797" Apr 17 23:35:57.031424 sudo[2516]: pam_unix(sudo:session): session closed for user root Apr 17 23:35:57.200526 sshd[2512]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:57.212907 systemd-logind[2124]: 
Session 7 logged out. Waiting for processes to exit. Apr 17 23:35:57.216304 systemd[1]: sshd@6-172.31.31.247:22-4.175.71.9:41762.service: Deactivated successfully. Apr 17 23:35:57.236043 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:35:57.242173 systemd-logind[2124]: Removed session 7. Apr 17 23:36:04.896724 kubelet[3715]: I0417 23:36:04.896604 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z896h\" (UniqueName: \"kubernetes.io/projected/f09f269f-adfb-45a9-8aee-060f9f0c48e6-kube-api-access-z896h\") pod \"calico-typha-7b697c7478-ckkdx\" (UID: \"f09f269f-adfb-45a9-8aee-060f9f0c48e6\") " pod="calico-system/calico-typha-7b697c7478-ckkdx" Apr 17 23:36:04.896724 kubelet[3715]: I0417 23:36:04.896699 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f09f269f-adfb-45a9-8aee-060f9f0c48e6-typha-certs\") pod \"calico-typha-7b697c7478-ckkdx\" (UID: \"f09f269f-adfb-45a9-8aee-060f9f0c48e6\") " pod="calico-system/calico-typha-7b697c7478-ckkdx" Apr 17 23:36:04.897541 kubelet[3715]: I0417 23:36:04.896746 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f09f269f-adfb-45a9-8aee-060f9f0c48e6-tigera-ca-bundle\") pod \"calico-typha-7b697c7478-ckkdx\" (UID: \"f09f269f-adfb-45a9-8aee-060f9f0c48e6\") " pod="calico-system/calico-typha-7b697c7478-ckkdx" Apr 17 23:36:05.117522 containerd[2145]: time="2026-04-17T23:36:05.116918404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b697c7478-ckkdx,Uid:f09f269f-adfb-45a9-8aee-060f9f0c48e6,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:05.199431 containerd[2145]: time="2026-04-17T23:36:05.194560816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:05.199431 containerd[2145]: time="2026-04-17T23:36:05.194672728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:05.199431 containerd[2145]: time="2026-04-17T23:36:05.194709304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:05.199431 containerd[2145]: time="2026-04-17T23:36:05.194894044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:05.200961 kubelet[3715]: I0417 23:36:05.198635 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-nodeproc\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.200961 kubelet[3715]: I0417 23:36:05.198722 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-policysync\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.200961 kubelet[3715]: I0417 23:36:05.198785 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-cni-bin-dir\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.200961 kubelet[3715]: I0417 23:36:05.198829 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-cni-net-dir\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.200961 kubelet[3715]: I0417 23:36:05.198866 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-lib-modules\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201332 kubelet[3715]: I0417 23:36:05.198904 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-xtables-lock\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201332 kubelet[3715]: I0417 23:36:05.198991 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-node-certs\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201332 kubelet[3715]: I0417 23:36:05.199027 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-tigera-ca-bundle\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201332 kubelet[3715]: I0417 23:36:05.199129 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-bpffs\") pod 
\"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201332 kubelet[3715]: I0417 23:36:05.199166 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-var-run-calico\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201604 kubelet[3715]: I0417 23:36:05.199204 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-flexvol-driver-host\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201604 kubelet[3715]: I0417 23:36:05.199240 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-sys-fs\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201604 kubelet[3715]: I0417 23:36:05.199275 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-var-lib-calico\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201604 kubelet[3715]: I0417 23:36:05.199317 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjn24\" (UniqueName: \"kubernetes.io/projected/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-kube-api-access-mjn24\") pod \"calico-node-r6fj2\" (UID: 
\"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.201604 kubelet[3715]: I0417 23:36:05.199356 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0-cni-log-dir\") pod \"calico-node-r6fj2\" (UID: \"8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0\") " pod="calico-system/calico-node-r6fj2" Apr 17 23:36:05.324477 kubelet[3715]: E0417 23:36:05.322260 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.324477 kubelet[3715]: W0417 23:36:05.322356 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.324477 kubelet[3715]: E0417 23:36:05.322398 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.333315 kubelet[3715]: E0417 23:36:05.333193 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.333315 kubelet[3715]: W0417 23:36:05.333237 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.333315 kubelet[3715]: E0417 23:36:05.333291 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.338547 kubelet[3715]: E0417 23:36:05.337638 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.338547 kubelet[3715]: W0417 23:36:05.337676 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.338547 kubelet[3715]: E0417 23:36:05.337711 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.346249 kubelet[3715]: E0417 23:36:05.344410 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.346249 kubelet[3715]: W0417 23:36:05.344448 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.346249 kubelet[3715]: E0417 23:36:05.344479 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.347278 kubelet[3715]: E0417 23:36:05.347030 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.347278 kubelet[3715]: W0417 23:36:05.347270 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.347634 kubelet[3715]: E0417 23:36:05.347566 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.352609 kubelet[3715]: E0417 23:36:05.352216 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.352609 kubelet[3715]: W0417 23:36:05.352275 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.352609 kubelet[3715]: E0417 23:36:05.352308 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.356273 kubelet[3715]: E0417 23:36:05.356227 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.356736 kubelet[3715]: W0417 23:36:05.356694 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.358966 kubelet[3715]: E0417 23:36:05.358910 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.364764 kubelet[3715]: E0417 23:36:05.364662 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.364764 kubelet[3715]: W0417 23:36:05.364758 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.365248 kubelet[3715]: E0417 23:36:05.364795 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.366307 kubelet[3715]: E0417 23:36:05.366248 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.366307 kubelet[3715]: W0417 23:36:05.366292 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.367625 kubelet[3715]: E0417 23:36:05.366326 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.370173 kubelet[3715]: E0417 23:36:05.370133 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.371038 kubelet[3715]: W0417 23:36:05.370647 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.371038 kubelet[3715]: E0417 23:36:05.370808 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.374290 kubelet[3715]: E0417 23:36:05.374236 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.374290 kubelet[3715]: W0417 23:36:05.374278 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.374861 kubelet[3715]: E0417 23:36:05.374314 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.378151 kubelet[3715]: E0417 23:36:05.378046 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.378151 kubelet[3715]: W0417 23:36:05.378119 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.378151 kubelet[3715]: E0417 23:36:05.378156 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.383352 kubelet[3715]: E0417 23:36:05.383288 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.383352 kubelet[3715]: W0417 23:36:05.383331 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.383584 kubelet[3715]: E0417 23:36:05.383367 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.388319 kubelet[3715]: E0417 23:36:05.388269 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.388319 kubelet[3715]: W0417 23:36:05.388307 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.388495 kubelet[3715]: E0417 23:36:05.388340 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.392356 kubelet[3715]: E0417 23:36:05.391786 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:05.453272 kubelet[3715]: E0417 23:36:05.451522 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.453272 kubelet[3715]: W0417 23:36:05.451562 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.453272 kubelet[3715]: E0417 23:36:05.451599 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.472296 kubelet[3715]: E0417 23:36:05.470788 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.472296 kubelet[3715]: W0417 23:36:05.471299 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.472296 kubelet[3715]: E0417 23:36:05.471359 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.474421 kubelet[3715]: E0417 23:36:05.474279 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.474421 kubelet[3715]: W0417 23:36:05.474338 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.474570 kubelet[3715]: E0417 23:36:05.474439 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.477200 kubelet[3715]: E0417 23:36:05.475891 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.477200 kubelet[3715]: W0417 23:36:05.475923 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.477200 kubelet[3715]: E0417 23:36:05.475955 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.479446 kubelet[3715]: E0417 23:36:05.479137 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.479446 kubelet[3715]: W0417 23:36:05.479173 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.479446 kubelet[3715]: E0417 23:36:05.479221 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.482038 kubelet[3715]: E0417 23:36:05.481531 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.483484 kubelet[3715]: W0417 23:36:05.481570 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.483484 kubelet[3715]: E0417 23:36:05.482486 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.487080 kubelet[3715]: E0417 23:36:05.484561 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.487080 kubelet[3715]: W0417 23:36:05.484593 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.487080 kubelet[3715]: E0417 23:36:05.484817 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.489880 kubelet[3715]: E0417 23:36:05.488163 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.489880 kubelet[3715]: W0417 23:36:05.488198 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.489880 kubelet[3715]: E0417 23:36:05.488234 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.491155 kubelet[3715]: E0417 23:36:05.490243 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.491155 kubelet[3715]: W0417 23:36:05.490283 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.491155 kubelet[3715]: E0417 23:36:05.490320 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.492123 kubelet[3715]: E0417 23:36:05.492055 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.492340 kubelet[3715]: W0417 23:36:05.492150 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.492340 kubelet[3715]: E0417 23:36:05.492186 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.494475 kubelet[3715]: E0417 23:36:05.494422 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.494475 kubelet[3715]: W0417 23:36:05.494463 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.494667 kubelet[3715]: E0417 23:36:05.494496 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.495632 kubelet[3715]: E0417 23:36:05.495567 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.495632 kubelet[3715]: W0417 23:36:05.495608 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.496823 kubelet[3715]: E0417 23:36:05.495642 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.497331 kubelet[3715]: E0417 23:36:05.497034 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.497331 kubelet[3715]: W0417 23:36:05.497072 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.497331 kubelet[3715]: E0417 23:36:05.497172 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.498640 kubelet[3715]: E0417 23:36:05.498444 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.498640 kubelet[3715]: W0417 23:36:05.498522 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.498640 kubelet[3715]: E0417 23:36:05.498558 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.499394 kubelet[3715]: E0417 23:36:05.499360 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.499473 kubelet[3715]: W0417 23:36:05.499395 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.499473 kubelet[3715]: E0417 23:36:05.499426 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.500425 kubelet[3715]: E0417 23:36:05.500245 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.500425 kubelet[3715]: W0417 23:36:05.500281 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.500425 kubelet[3715]: E0417 23:36:05.500309 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.501646 kubelet[3715]: E0417 23:36:05.500991 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.501646 kubelet[3715]: W0417 23:36:05.501182 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.501646 kubelet[3715]: E0417 23:36:05.501214 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.506285 kubelet[3715]: E0417 23:36:05.506232 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.506285 kubelet[3715]: W0417 23:36:05.506272 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.507307 kubelet[3715]: E0417 23:36:05.506307 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.508462 kubelet[3715]: E0417 23:36:05.508341 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.508462 kubelet[3715]: W0417 23:36:05.508436 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.508737 kubelet[3715]: E0417 23:36:05.508473 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.510947 kubelet[3715]: E0417 23:36:05.509804 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.510947 kubelet[3715]: W0417 23:36:05.510066 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.510947 kubelet[3715]: E0417 23:36:05.510452 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.513840 kubelet[3715]: E0417 23:36:05.512930 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.513840 kubelet[3715]: W0417 23:36:05.512984 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.513840 kubelet[3715]: E0417 23:36:05.513019 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.519968 kubelet[3715]: E0417 23:36:05.516683 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.519968 kubelet[3715]: W0417 23:36:05.516722 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.519968 kubelet[3715]: E0417 23:36:05.516788 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.519968 kubelet[3715]: I0417 23:36:05.516866 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grl85\" (UniqueName: \"kubernetes.io/projected/5743ab00-ab78-4f3d-b36a-493284f86675-kube-api-access-grl85\") pod \"csi-node-driver-m7ffs\" (UID: \"5743ab00-ab78-4f3d-b36a-493284f86675\") " pod="calico-system/csi-node-driver-m7ffs" Apr 17 23:36:05.519968 kubelet[3715]: E0417 23:36:05.519169 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.519968 kubelet[3715]: W0417 23:36:05.519203 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.519968 kubelet[3715]: E0417 23:36:05.519240 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.519968 kubelet[3715]: I0417 23:36:05.519282 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5743ab00-ab78-4f3d-b36a-493284f86675-registration-dir\") pod \"csi-node-driver-m7ffs\" (UID: \"5743ab00-ab78-4f3d-b36a-493284f86675\") " pod="calico-system/csi-node-driver-m7ffs" Apr 17 23:36:05.526135 kubelet[3715]: E0417 23:36:05.521495 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.526135 kubelet[3715]: W0417 23:36:05.521538 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.526135 kubelet[3715]: E0417 23:36:05.521572 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.526135 kubelet[3715]: I0417 23:36:05.521623 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5743ab00-ab78-4f3d-b36a-493284f86675-socket-dir\") pod \"csi-node-driver-m7ffs\" (UID: \"5743ab00-ab78-4f3d-b36a-493284f86675\") " pod="calico-system/csi-node-driver-m7ffs" Apr 17 23:36:05.526135 kubelet[3715]: E0417 23:36:05.523548 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.526135 kubelet[3715]: W0417 23:36:05.523581 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.526135 kubelet[3715]: E0417 23:36:05.523614 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.526135 kubelet[3715]: I0417 23:36:05.523753 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5743ab00-ab78-4f3d-b36a-493284f86675-varrun\") pod \"csi-node-driver-m7ffs\" (UID: \"5743ab00-ab78-4f3d-b36a-493284f86675\") " pod="calico-system/csi-node-driver-m7ffs" Apr 17 23:36:05.526135 kubelet[3715]: E0417 23:36:05.524533 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.526785 kubelet[3715]: W0417 23:36:05.524559 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.526785 kubelet[3715]: E0417 23:36:05.524712 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.526785 kubelet[3715]: E0417 23:36:05.525644 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.526785 kubelet[3715]: W0417 23:36:05.525673 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.526785 kubelet[3715]: E0417 23:36:05.525703 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.527050 kubelet[3715]: E0417 23:36:05.526817 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.527050 kubelet[3715]: W0417 23:36:05.526843 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.527050 kubelet[3715]: E0417 23:36:05.526872 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.531377 kubelet[3715]: E0417 23:36:05.527860 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.531377 kubelet[3715]: W0417 23:36:05.527900 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.531377 kubelet[3715]: E0417 23:36:05.527931 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.531377 kubelet[3715]: I0417 23:36:05.528251 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5743ab00-ab78-4f3d-b36a-493284f86675-kubelet-dir\") pod \"csi-node-driver-m7ffs\" (UID: \"5743ab00-ab78-4f3d-b36a-493284f86675\") " pod="calico-system/csi-node-driver-m7ffs" Apr 17 23:36:05.531377 kubelet[3715]: E0417 23:36:05.529217 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.531377 kubelet[3715]: W0417 23:36:05.529243 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.531377 kubelet[3715]: E0417 23:36:05.529293 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.531377 kubelet[3715]: E0417 23:36:05.530428 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.531377 kubelet[3715]: W0417 23:36:05.530457 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.531924 kubelet[3715]: E0417 23:36:05.530488 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.531924 kubelet[3715]: E0417 23:36:05.531650 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.531924 kubelet[3715]: W0417 23:36:05.531678 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.531924 kubelet[3715]: E0417 23:36:05.531710 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.535260 kubelet[3715]: E0417 23:36:05.532687 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.535260 kubelet[3715]: W0417 23:36:05.532731 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.535260 kubelet[3715]: E0417 23:36:05.532764 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.535260 kubelet[3715]: E0417 23:36:05.533783 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.535260 kubelet[3715]: W0417 23:36:05.534172 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.535260 kubelet[3715]: E0417 23:36:05.534206 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.535260 kubelet[3715]: E0417 23:36:05.535194 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.535701 kubelet[3715]: W0417 23:36:05.535334 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.535701 kubelet[3715]: E0417 23:36:05.535373 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.540151 kubelet[3715]: E0417 23:36:05.537436 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.540151 kubelet[3715]: W0417 23:36:05.537480 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.540151 kubelet[3715]: E0417 23:36:05.537514 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.596362 containerd[2145]: time="2026-04-17T23:36:05.596292558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b697c7478-ckkdx,Uid:f09f269f-adfb-45a9-8aee-060f9f0c48e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fef46bfbaff6465c9849e4b521a93a8c44ce507cf78fbefb9c310b824006b1c\"" Apr 17 23:36:05.605796 containerd[2145]: time="2026-04-17T23:36:05.605727114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:36:05.641672 kubelet[3715]: E0417 23:36:05.641381 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.641672 kubelet[3715]: W0417 23:36:05.641436 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.641672 kubelet[3715]: E0417 23:36:05.641471 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.647114 kubelet[3715]: E0417 23:36:05.645495 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.647114 kubelet[3715]: W0417 23:36:05.645530 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.647114 kubelet[3715]: E0417 23:36:05.645558 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.647374 kubelet[3715]: E0417 23:36:05.645405 3715 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podf09f269f-adfb-45a9-8aee-060f9f0c48e6/4fef46bfbaff6465c9849e4b521a93a8c44ce507cf78fbefb9c310b824006b1c\": RecentStats: unable to find data in memory cache]" Apr 17 23:36:05.647783 kubelet[3715]: E0417 23:36:05.647756 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.648138 kubelet[3715]: W0417 23:36:05.647880 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.648138 kubelet[3715]: E0417 23:36:05.647916 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.648541 kubelet[3715]: E0417 23:36:05.648516 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.649279 kubelet[3715]: W0417 23:36:05.649228 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.649467 kubelet[3715]: E0417 23:36:05.649440 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.650320 kubelet[3715]: E0417 23:36:05.650228 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.650466 kubelet[3715]: W0417 23:36:05.650441 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.650598 kubelet[3715]: E0417 23:36:05.650575 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.652562 kubelet[3715]: E0417 23:36:05.652364 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.652562 kubelet[3715]: W0417 23:36:05.652393 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.652562 kubelet[3715]: E0417 23:36:05.652420 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.653208 kubelet[3715]: E0417 23:36:05.652996 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.653208 kubelet[3715]: W0417 23:36:05.653020 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.653208 kubelet[3715]: E0417 23:36:05.653043 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.654690 kubelet[3715]: E0417 23:36:05.654661 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.655106 kubelet[3715]: W0417 23:36:05.654829 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.655106 kubelet[3715]: E0417 23:36:05.654864 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.656381 kubelet[3715]: E0417 23:36:05.656354 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.656522 kubelet[3715]: W0417 23:36:05.656499 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.656650 kubelet[3715]: E0417 23:36:05.656629 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.657472 kubelet[3715]: E0417 23:36:05.657386 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.657472 kubelet[3715]: W0417 23:36:05.657415 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.657472 kubelet[3715]: E0417 23:36:05.657438 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.658441 kubelet[3715]: E0417 23:36:05.658177 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.658441 kubelet[3715]: W0417 23:36:05.658237 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.658441 kubelet[3715]: E0417 23:36:05.658264 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:05.659249 kubelet[3715]: E0417 23:36:05.659077 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:05.659249 kubelet[3715]: W0417 23:36:05.659124 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:05.659249 kubelet[3715]: E0417 23:36:05.659183 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:05.690505 containerd[2145]: time="2026-04-17T23:36:05.689717118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r6fj2,Uid:8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:05.730990 containerd[2145]: time="2026-04-17T23:36:05.729898183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:05.730990 containerd[2145]: time="2026-04-17T23:36:05.730023355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:05.730990 containerd[2145]: time="2026-04-17T23:36:05.730062535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:05.730990 containerd[2145]: time="2026-04-17T23:36:05.730384111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:05.813357 containerd[2145]: time="2026-04-17T23:36:05.813291967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r6fj2,Uid:8fd2e2dc-62f4-4060-8ba5-29d283cdb4a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"9befb5c7c60b830b17ace25821ddc60cef2d113de9b2bedfd25ee55e02222675\"" Apr 17 23:36:06.705506 kubelet[3715]: E0417 23:36:06.704891 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:06.890244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397965723.mount: Deactivated successfully. Apr 17 23:36:07.712624 containerd[2145]: time="2026-04-17T23:36:07.712535468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:07.715362 containerd[2145]: time="2026-04-17T23:36:07.715278020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 17 23:36:07.716569 containerd[2145]: time="2026-04-17T23:36:07.716507204Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:07.724206 containerd[2145]: time="2026-04-17T23:36:07.724129437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:07.727462 containerd[2145]: time="2026-04-17T23:36:07.727392993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.121598871s" Apr 17 23:36:07.727462 containerd[2145]: time="2026-04-17T23:36:07.727455501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 17 23:36:07.732317 containerd[2145]: time="2026-04-17T23:36:07.731695521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:36:07.757961 containerd[2145]: time="2026-04-17T23:36:07.757668465Z" level=info msg="CreateContainer within sandbox \"4fef46bfbaff6465c9849e4b521a93a8c44ce507cf78fbefb9c310b824006b1c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:36:07.783706 containerd[2145]: time="2026-04-17T23:36:07.783636381Z" level=info msg="CreateContainer within sandbox \"4fef46bfbaff6465c9849e4b521a93a8c44ce507cf78fbefb9c310b824006b1c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fd8881bd06d3528a49919d895cf5806f173818ad370975c399ba5c91a8f42c97\"" Apr 17 23:36:07.784910 containerd[2145]: time="2026-04-17T23:36:07.784849017Z" level=info msg="StartContainer for \"fd8881bd06d3528a49919d895cf5806f173818ad370975c399ba5c91a8f42c97\"" Apr 17 23:36:07.918242 containerd[2145]: time="2026-04-17T23:36:07.916988433Z" level=info msg="StartContainer for \"fd8881bd06d3528a49919d895cf5806f173818ad370975c399ba5c91a8f42c97\" returns successfully" Apr 17 23:36:07.962933 kubelet[3715]: I0417 23:36:07.962387 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b697c7478-ckkdx" podStartSLOduration=1.8383641910000001 podStartE2EDuration="3.962167978s" podCreationTimestamp="2026-04-17 23:36:04 +0000 UTC" 
firstStartedPulling="2026-04-17 23:36:05.60474615 +0000 UTC m=+31.154538420" lastFinishedPulling="2026-04-17 23:36:07.728549937 +0000 UTC m=+33.278342207" observedRunningTime="2026-04-17 23:36:07.960217402 +0000 UTC m=+33.510009696" watchObservedRunningTime="2026-04-17 23:36:07.962167978 +0000 UTC m=+33.511960260" Apr 17 23:36:08.033544 kubelet[3715]: E0417 23:36:08.033452 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:08.033892 kubelet[3715]: W0417 23:36:08.033693 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:08.033892 kubelet[3715]: E0417 23:36:08.033733 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:08.699881 kubelet[3715]: E0417 23:36:08.699765 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:08.740904 systemd[1]: run-containerd-runc-k8s.io-fd8881bd06d3528a49919d895cf5806f173818ad370975c399ba5c91a8f42c97-runc.mn1QxT.mount: Deactivated successfully. Apr 17 23:36:08.955245 kubelet[3715]: E0417 23:36:08.954905 3715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:08.955245 kubelet[3715]: W0417 23:36:08.955073 3715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:08.955245 kubelet[3715]: E0417 23:36:08.955159 3715 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:09.083377 containerd[2145]: time="2026-04-17T23:36:09.083312203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:09.084885 containerd[2145]: time="2026-04-17T23:36:09.084831187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 17 23:36:09.086126 containerd[2145]: time="2026-04-17T23:36:09.085715467Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:09.090073 containerd[2145]: time="2026-04-17T23:36:09.089988235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:09.091883 containerd[2145]: time="2026-04-17T23:36:09.091695727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.359614526s" Apr 17 23:36:09.091883 containerd[2145]: time="2026-04-17T23:36:09.091751155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 17 23:36:09.102304 containerd[2145]: time="2026-04-17T23:36:09.102243379Z" level=info msg="CreateContainer within sandbox \"9befb5c7c60b830b17ace25821ddc60cef2d113de9b2bedfd25ee55e02222675\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:36:09.124424 containerd[2145]: time="2026-04-17T23:36:09.124356391Z" level=info msg="CreateContainer within sandbox \"9befb5c7c60b830b17ace25821ddc60cef2d113de9b2bedfd25ee55e02222675\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1b1ceec75e9662233885ba698c6c46566cb1507f2c3051b4bb88226c6f1891b2\"" Apr 17 23:36:09.126308 containerd[2145]: time="2026-04-17T23:36:09.126248419Z" level=info msg="StartContainer for \"1b1ceec75e9662233885ba698c6c46566cb1507f2c3051b4bb88226c6f1891b2\"" Apr 17 23:36:09.245884 containerd[2145]: time="2026-04-17T23:36:09.243602312Z" level=info msg="StartContainer for \"1b1ceec75e9662233885ba698c6c46566cb1507f2c3051b4bb88226c6f1891b2\" returns successfully" Apr 17 23:36:09.742563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b1ceec75e9662233885ba698c6c46566cb1507f2c3051b4bb88226c6f1891b2-rootfs.mount: Deactivated successfully. Apr 17 23:36:09.782795 containerd[2145]: time="2026-04-17T23:36:09.782211419Z" level=info msg="shim disconnected" id=1b1ceec75e9662233885ba698c6c46566cb1507f2c3051b4bb88226c6f1891b2 namespace=k8s.io Apr 17 23:36:09.782795 containerd[2145]: time="2026-04-17T23:36:09.782293907Z" level=warning msg="cleaning up after shim disconnected" id=1b1ceec75e9662233885ba698c6c46566cb1507f2c3051b4bb88226c6f1891b2 namespace=k8s.io Apr 17 23:36:09.782795 containerd[2145]: time="2026-04-17T23:36:09.782316023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:09.948909 containerd[2145]: time="2026-04-17T23:36:09.948518268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:36:10.700808 kubelet[3715]: E0417 23:36:10.699987 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:12.699896 kubelet[3715]: E0417 23:36:12.699841 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:14.701373 kubelet[3715]: E0417 23:36:14.700761 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:16.536925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3928008453.mount: Deactivated successfully. Apr 17 23:36:16.598932 containerd[2145]: time="2026-04-17T23:36:16.597461417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:16.598932 containerd[2145]: time="2026-04-17T23:36:16.598872641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 17 23:36:16.599843 containerd[2145]: time="2026-04-17T23:36:16.599794373Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:16.603430 containerd[2145]: time="2026-04-17T23:36:16.603375713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:16.605052 containerd[2145]: 
time="2026-04-17T23:36:16.604986197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.656365941s" Apr 17 23:36:16.605252 containerd[2145]: time="2026-04-17T23:36:16.605050253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 17 23:36:16.612620 containerd[2145]: time="2026-04-17T23:36:16.612563453Z" level=info msg="CreateContainer within sandbox \"9befb5c7c60b830b17ace25821ddc60cef2d113de9b2bedfd25ee55e02222675\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:36:16.645167 containerd[2145]: time="2026-04-17T23:36:16.642505073Z" level=info msg="CreateContainer within sandbox \"9befb5c7c60b830b17ace25821ddc60cef2d113de9b2bedfd25ee55e02222675\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"edba27c1164c573f1b14b0e59621e2d9e6578a0022bf6d2b1f7c8449f36a855c\"" Apr 17 23:36:16.645167 containerd[2145]: time="2026-04-17T23:36:16.643745621Z" level=info msg="StartContainer for \"edba27c1164c573f1b14b0e59621e2d9e6578a0022bf6d2b1f7c8449f36a855c\"" Apr 17 23:36:16.703583 kubelet[3715]: E0417 23:36:16.703171 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:16.758356 containerd[2145]: time="2026-04-17T23:36:16.758280161Z" level=info msg="StartContainer for \"edba27c1164c573f1b14b0e59621e2d9e6578a0022bf6d2b1f7c8449f36a855c\" 
returns successfully" Apr 17 23:36:17.535609 systemd[1]: run-containerd-runc-k8s.io-edba27c1164c573f1b14b0e59621e2d9e6578a0022bf6d2b1f7c8449f36a855c-runc.G9UTCH.mount: Deactivated successfully. Apr 17 23:36:17.535889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edba27c1164c573f1b14b0e59621e2d9e6578a0022bf6d2b1f7c8449f36a855c-rootfs.mount: Deactivated successfully. Apr 17 23:36:17.570513 containerd[2145]: time="2026-04-17T23:36:17.570309869Z" level=info msg="shim disconnected" id=edba27c1164c573f1b14b0e59621e2d9e6578a0022bf6d2b1f7c8449f36a855c namespace=k8s.io Apr 17 23:36:17.570783 containerd[2145]: time="2026-04-17T23:36:17.570525701Z" level=warning msg="cleaning up after shim disconnected" id=edba27c1164c573f1b14b0e59621e2d9e6578a0022bf6d2b1f7c8449f36a855c namespace=k8s.io Apr 17 23:36:17.570783 containerd[2145]: time="2026-04-17T23:36:17.570553085Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:17.984010 containerd[2145]: time="2026-04-17T23:36:17.982567807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:36:18.700368 kubelet[3715]: E0417 23:36:18.699537 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:20.700824 kubelet[3715]: E0417 23:36:20.700777 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:20.908403 containerd[2145]: time="2026-04-17T23:36:20.907561678Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:20.910769 containerd[2145]: time="2026-04-17T23:36:20.909502270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 17 23:36:20.911699 containerd[2145]: time="2026-04-17T23:36:20.911640142Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:20.918598 containerd[2145]: time="2026-04-17T23:36:20.918509902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:20.920337 containerd[2145]: time="2026-04-17T23:36:20.920146486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.937483675s" Apr 17 23:36:20.920337 containerd[2145]: time="2026-04-17T23:36:20.920200570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 17 23:36:20.928498 containerd[2145]: time="2026-04-17T23:36:20.928434250Z" level=info msg="CreateContainer within sandbox \"9befb5c7c60b830b17ace25821ddc60cef2d113de9b2bedfd25ee55e02222675\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:36:20.961327 containerd[2145]: time="2026-04-17T23:36:20.960915970Z" level=info msg="CreateContainer within sandbox \"9befb5c7c60b830b17ace25821ddc60cef2d113de9b2bedfd25ee55e02222675\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d9ff81f9f66ecd543f695a9af209f16a2b7c9e747130bc1ae708d209331204a6\"" Apr 17 23:36:20.962558 containerd[2145]: time="2026-04-17T23:36:20.962470714Z" level=info msg="StartContainer for \"d9ff81f9f66ecd543f695a9af209f16a2b7c9e747130bc1ae708d209331204a6\"" Apr 17 23:36:21.091756 containerd[2145]: time="2026-04-17T23:36:21.091655851Z" level=info msg="StartContainer for \"d9ff81f9f66ecd543f695a9af209f16a2b7c9e747130bc1ae708d209331204a6\" returns successfully" Apr 17 23:36:22.703399 kubelet[3715]: E0417 23:36:22.702136 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7ffs" podUID="5743ab00-ab78-4f3d-b36a-493284f86675" Apr 17 23:36:22.846375 containerd[2145]: time="2026-04-17T23:36:22.846298260Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:36:22.898151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9ff81f9f66ecd543f695a9af209f16a2b7c9e747130bc1ae708d209331204a6-rootfs.mount: Deactivated successfully. 
Apr 17 23:36:22.902742 containerd[2145]: time="2026-04-17T23:36:22.902638764Z" level=info msg="shim disconnected" id=d9ff81f9f66ecd543f695a9af209f16a2b7c9e747130bc1ae708d209331204a6 namespace=k8s.io Apr 17 23:36:22.902742 containerd[2145]: time="2026-04-17T23:36:22.902736012Z" level=warning msg="cleaning up after shim disconnected" id=d9ff81f9f66ecd543f695a9af209f16a2b7c9e747130bc1ae708d209331204a6 namespace=k8s.io Apr 17 23:36:22.904509 containerd[2145]: time="2026-04-17T23:36:22.902758764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:22.919143 kubelet[3715]: I0417 23:36:22.918027 3715 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:36:23.055046 kubelet[3715]: I0417 23:36:23.051531 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c42f7cc8-188c-4d4f-adfb-58e20708a530-calico-apiserver-certs\") pod \"calico-apiserver-79855f7dd6-v9wp9\" (UID: \"c42f7cc8-188c-4d4f-adfb-58e20708a530\") " pod="calico-system/calico-apiserver-79855f7dd6-v9wp9" Apr 17 23:36:23.055046 kubelet[3715]: I0417 23:36:23.051635 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqx26\" (UniqueName: \"kubernetes.io/projected/c42f7cc8-188c-4d4f-adfb-58e20708a530-kube-api-access-jqx26\") pod \"calico-apiserver-79855f7dd6-v9wp9\" (UID: \"c42f7cc8-188c-4d4f-adfb-58e20708a530\") " pod="calico-system/calico-apiserver-79855f7dd6-v9wp9" Apr 17 23:36:23.055046 kubelet[3715]: I0417 23:36:23.051708 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm5th\" (UniqueName: \"kubernetes.io/projected/c4ec6d36-3750-48ee-bd55-9f658f7d853e-kube-api-access-zm5th\") pod \"coredns-674b8bbfcf-tvmqn\" (UID: \"c4ec6d36-3750-48ee-bd55-9f658f7d853e\") " pod="kube-system/coredns-674b8bbfcf-tvmqn" Apr 17 23:36:23.055046 
kubelet[3715]: I0417 23:36:23.051929 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4ec6d36-3750-48ee-bd55-9f658f7d853e-config-volume\") pod \"coredns-674b8bbfcf-tvmqn\" (UID: \"c4ec6d36-3750-48ee-bd55-9f658f7d853e\") " pod="kube-system/coredns-674b8bbfcf-tvmqn" Apr 17 23:36:23.146233 containerd[2145]: time="2026-04-17T23:36:23.144940881Z" level=info msg="CreateContainer within sandbox \"9befb5c7c60b830b17ace25821ddc60cef2d113de9b2bedfd25ee55e02222675\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:36:23.158547 kubelet[3715]: I0417 23:36:23.158474 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb35872-77ff-4818-9956-4687a66f2e28-config\") pod \"goldmane-5b85766d88-xcfhx\" (UID: \"3eb35872-77ff-4818-9956-4687a66f2e28\") " pod="calico-system/goldmane-5b85766d88-xcfhx" Apr 17 23:36:23.159214 kubelet[3715]: I0417 23:36:23.158557 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eb35872-77ff-4818-9956-4687a66f2e28-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-xcfhx\" (UID: \"3eb35872-77ff-4818-9956-4687a66f2e28\") " pod="calico-system/goldmane-5b85766d88-xcfhx" Apr 17 23:36:23.159214 kubelet[3715]: I0417 23:36:23.158819 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3eb35872-77ff-4818-9956-4687a66f2e28-goldmane-key-pair\") pod \"goldmane-5b85766d88-xcfhx\" (UID: \"3eb35872-77ff-4818-9956-4687a66f2e28\") " pod="calico-system/goldmane-5b85766d88-xcfhx" Apr 17 23:36:23.159214 kubelet[3715]: I0417 23:36:23.158943 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a39f673-617b-4850-b432-79bcea396fab-tigera-ca-bundle\") pod \"calico-kube-controllers-6c659d559c-t45jr\" (UID: \"6a39f673-617b-4850-b432-79bcea396fab\") " pod="calico-system/calico-kube-controllers-6c659d559c-t45jr" Apr 17 23:36:23.159214 kubelet[3715]: I0417 23:36:23.158994 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8zqp\" (UniqueName: \"kubernetes.io/projected/6a39f673-617b-4850-b432-79bcea396fab-kube-api-access-r8zqp\") pod \"calico-kube-controllers-6c659d559c-t45jr\" (UID: \"6a39f673-617b-4850-b432-79bcea396fab\") " pod="calico-system/calico-kube-controllers-6c659d559c-t45jr" Apr 17 23:36:23.164389 kubelet[3715]: I0417 23:36:23.159266 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vt8m\" (UniqueName: \"kubernetes.io/projected/becb7dd4-85d3-451c-8c3a-f9364e4aeec8-kube-api-access-6vt8m\") pod \"coredns-674b8bbfcf-v5g4s\" (UID: \"becb7dd4-85d3-451c-8c3a-f9364e4aeec8\") " pod="kube-system/coredns-674b8bbfcf-v5g4s" Apr 17 23:36:23.164389 kubelet[3715]: I0417 23:36:23.159416 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/753b7e4f-9fdf-46c1-9a03-9817d9a318d6-calico-apiserver-certs\") pod \"calico-apiserver-79855f7dd6-9l8cf\" (UID: \"753b7e4f-9fdf-46c1-9a03-9817d9a318d6\") " pod="calico-system/calico-apiserver-79855f7dd6-9l8cf" Apr 17 23:36:23.164389 kubelet[3715]: I0417 23:36:23.159471 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7f054ca4-5891-40ee-ba80-c401a6255ea7-nginx-config\") pod \"whisker-7979cdb584-z9jkm\" (UID: \"7f054ca4-5891-40ee-ba80-c401a6255ea7\") " pod="calico-system/whisker-7979cdb584-z9jkm" Apr 17 23:36:23.164389 
kubelet[3715]: I0417 23:36:23.160307 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r78ld\" (UniqueName: \"kubernetes.io/projected/7f054ca4-5891-40ee-ba80-c401a6255ea7-kube-api-access-r78ld\") pod \"whisker-7979cdb584-z9jkm\" (UID: \"7f054ca4-5891-40ee-ba80-c401a6255ea7\") " pod="calico-system/whisker-7979cdb584-z9jkm" Apr 17 23:36:23.164389 kubelet[3715]: I0417 23:36:23.160436 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/becb7dd4-85d3-451c-8c3a-f9364e4aeec8-config-volume\") pod \"coredns-674b8bbfcf-v5g4s\" (UID: \"becb7dd4-85d3-451c-8c3a-f9364e4aeec8\") " pod="kube-system/coredns-674b8bbfcf-v5g4s" Apr 17 23:36:23.164685 kubelet[3715]: I0417 23:36:23.160497 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f054ca4-5891-40ee-ba80-c401a6255ea7-whisker-backend-key-pair\") pod \"whisker-7979cdb584-z9jkm\" (UID: \"7f054ca4-5891-40ee-ba80-c401a6255ea7\") " pod="calico-system/whisker-7979cdb584-z9jkm" Apr 17 23:36:23.164685 kubelet[3715]: I0417 23:36:23.160619 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8nv4\" (UniqueName: \"kubernetes.io/projected/3eb35872-77ff-4818-9956-4687a66f2e28-kube-api-access-h8nv4\") pod \"goldmane-5b85766d88-xcfhx\" (UID: \"3eb35872-77ff-4818-9956-4687a66f2e28\") " pod="calico-system/goldmane-5b85766d88-xcfhx" Apr 17 23:36:23.164685 kubelet[3715]: I0417 23:36:23.162586 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttfh8\" (UniqueName: \"kubernetes.io/projected/753b7e4f-9fdf-46c1-9a03-9817d9a318d6-kube-api-access-ttfh8\") pod \"calico-apiserver-79855f7dd6-9l8cf\" (UID: 
\"753b7e4f-9fdf-46c1-9a03-9817d9a318d6\") " pod="calico-system/calico-apiserver-79855f7dd6-9l8cf" Apr 17 23:36:23.164685 kubelet[3715]: I0417 23:36:23.162757 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f054ca4-5891-40ee-ba80-c401a6255ea7-whisker-ca-bundle\") pod \"whisker-7979cdb584-z9jkm\" (UID: \"7f054ca4-5891-40ee-ba80-c401a6255ea7\") " pod="calico-system/whisker-7979cdb584-z9jkm" Apr 17 23:36:23.187396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518630869.mount: Deactivated successfully. Apr 17 23:36:23.195142 containerd[2145]: time="2026-04-17T23:36:23.194958561Z" level=info msg="CreateContainer within sandbox \"9befb5c7c60b830b17ace25821ddc60cef2d113de9b2bedfd25ee55e02222675\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ede11c2a5ead51fe238696b2dde21610a69acce1b5f92e828052b0d374e47449\"" Apr 17 23:36:23.211135 containerd[2145]: time="2026-04-17T23:36:23.209014053Z" level=info msg="StartContainer for \"ede11c2a5ead51fe238696b2dde21610a69acce1b5f92e828052b0d374e47449\"" Apr 17 23:36:23.336941 containerd[2145]: time="2026-04-17T23:36:23.336496222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvmqn,Uid:c4ec6d36-3750-48ee-bd55-9f658f7d853e,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:23.363579 containerd[2145]: time="2026-04-17T23:36:23.359722666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79855f7dd6-v9wp9,Uid:c42f7cc8-188c-4d4f-adfb-58e20708a530,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:23.371404 containerd[2145]: time="2026-04-17T23:36:23.371350150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c659d559c-t45jr,Uid:6a39f673-617b-4850-b432-79bcea396fab,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:23.376127 containerd[2145]: time="2026-04-17T23:36:23.375940930Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xcfhx,Uid:3eb35872-77ff-4818-9956-4687a66f2e28,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:23.386584 containerd[2145]: time="2026-04-17T23:36:23.385608166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5g4s,Uid:becb7dd4-85d3-451c-8c3a-f9364e4aeec8,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:23.524376 containerd[2145]: time="2026-04-17T23:36:23.524280203Z" level=info msg="StartContainer for \"ede11c2a5ead51fe238696b2dde21610a69acce1b5f92e828052b0d374e47449\" returns successfully" Apr 17 23:36:23.653227 containerd[2145]: time="2026-04-17T23:36:23.652471680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7979cdb584-z9jkm,Uid:7f054ca4-5891-40ee-ba80-c401a6255ea7,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:23.683662 containerd[2145]: time="2026-04-17T23:36:23.682165140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79855f7dd6-9l8cf,Uid:753b7e4f-9fdf-46c1-9a03-9817d9a318d6,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:24.222604 kubelet[3715]: I0417 23:36:24.222294 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r6fj2" podStartSLOduration=4.117977251 podStartE2EDuration="19.222081718s" podCreationTimestamp="2026-04-17 23:36:05 +0000 UTC" firstStartedPulling="2026-04-17 23:36:05.818010751 +0000 UTC m=+31.367803033" lastFinishedPulling="2026-04-17 23:36:20.922115218 +0000 UTC m=+46.471907500" observedRunningTime="2026-04-17 23:36:24.215763298 +0000 UTC m=+49.765555580" watchObservedRunningTime="2026-04-17 23:36:24.222081718 +0000 UTC m=+49.771874012" Apr 17 23:36:24.733387 containerd[2145]: time="2026-04-17T23:36:24.731637817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7ffs,Uid:5743ab00-ab78-4f3d-b36a-493284f86675,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:25.087498 (udev-worker)[4900]: Network interface 
NamePolicy= disabled on kernel command line. Apr 17 23:36:25.093861 systemd-networkd[1694]: cali6d38a08862d: Link UP Apr 17 23:36:25.098024 systemd-networkd[1694]: cali6d38a08862d: Gained carrier Apr 17 23:36:25.218913 (udev-worker)[4899]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.050 [ERROR][4672] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.187 [INFO][4672] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0 goldmane-5b85766d88- calico-system 3eb35872-77ff-4818-9956-4687a66f2e28 900 0 2026-04-17 23:36:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-31-247 goldmane-5b85766d88-xcfhx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6d38a08862d [] [] }} ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Namespace="calico-system" Pod="goldmane-5b85766d88-xcfhx" WorkloadEndpoint="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.187 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Namespace="calico-system" Pod="goldmane-5b85766d88-xcfhx" WorkloadEndpoint="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.684 [INFO][4799] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" HandleID="k8s-pod-network.f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Workload="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.747 [INFO][4799] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" HandleID="k8s-pod-network.f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Workload="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cbcc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-247", "pod":"goldmane-5b85766d88-xcfhx", "timestamp":"2026-04-17 23:36:24.684402505 +0000 UTC"}, Hostname:"ip-172-31-31-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000260420)} Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.747 [INFO][4799] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.747 [INFO][4799] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.748 [INFO][4799] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-247' Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.753 [INFO][4799] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" host="ip-172-31-31-247" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.795 [INFO][4799] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-247" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.860 [INFO][4799] ipam/ipam.go 526: Trying affinity for 192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.870 [INFO][4799] ipam/ipam.go 160: Attempting to load block cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.893 [INFO][4799] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.893 [INFO][4799] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" host="ip-172-31-31-247" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.916 [INFO][4799] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.935 [INFO][4799] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" host="ip-172-31-31-247" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.959 [INFO][4799] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.115.193/26] 
block=192.168.115.192/26 handle="k8s-pod-network.f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" host="ip-172-31-31-247" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.959 [INFO][4799] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.115.193/26] handle="k8s-pod-network.f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" host="ip-172-31-31-247" Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.959 [INFO][4799] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:25.228056 containerd[2145]: 2026-04-17 23:36:24.959 [INFO][4799] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.115.193/26] IPv6=[] ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" HandleID="k8s-pod-network.f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Workload="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" Apr 17 23:36:25.237456 containerd[2145]: 2026-04-17 23:36:25.029 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Namespace="calico-system" Pod="goldmane-5b85766d88-xcfhx" WorkloadEndpoint="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3eb35872-77ff-4818-9956-4687a66f2e28", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"", Pod:"goldmane-5b85766d88-xcfhx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d38a08862d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:25.237456 containerd[2145]: 2026-04-17 23:36:25.033 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.193/32] ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Namespace="calico-system" Pod="goldmane-5b85766d88-xcfhx" WorkloadEndpoint="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" Apr 17 23:36:25.237456 containerd[2145]: 2026-04-17 23:36:25.033 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d38a08862d ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Namespace="calico-system" Pod="goldmane-5b85766d88-xcfhx" WorkloadEndpoint="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" Apr 17 23:36:25.237456 containerd[2145]: 2026-04-17 23:36:25.102 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Namespace="calico-system" Pod="goldmane-5b85766d88-xcfhx" WorkloadEndpoint="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" Apr 17 23:36:25.237456 containerd[2145]: 2026-04-17 23:36:25.109 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Namespace="calico-system" Pod="goldmane-5b85766d88-xcfhx" WorkloadEndpoint="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3eb35872-77ff-4818-9956-4687a66f2e28", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e", Pod:"goldmane-5b85766d88-xcfhx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d38a08862d", MAC:"62:e1:60:87:09:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:25.237456 containerd[2145]: 2026-04-17 23:36:25.189 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e" Namespace="calico-system" Pod="goldmane-5b85766d88-xcfhx" 
WorkloadEndpoint="ip--172--31--31--247-k8s-goldmane--5b85766d88--xcfhx-eth0" Apr 17 23:36:25.230747 systemd-networkd[1694]: cali82eca6df786: Link UP Apr 17 23:36:25.231789 systemd-networkd[1694]: cali82eca6df786: Gained carrier Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:24.791 [INFO][4780] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:24.791 [INFO][4780] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" iface="eth0" netns="/var/run/netns/cni-cb6910b1-9cd2-0474-cdd1-cd6d3fe7d5de" Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:24.792 [INFO][4780] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" iface="eth0" netns="/var/run/netns/cni-cb6910b1-9cd2-0474-cdd1-cd6d3fe7d5de" Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:24.792 [INFO][4780] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" iface="eth0" netns="/var/run/netns/cni-cb6910b1-9cd2-0474-cdd1-cd6d3fe7d5de" Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:24.792 [INFO][4780] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:24.792 [INFO][4780] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:25.191 [INFO][4864] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" HandleID="k8s-pod-network.91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" Workload="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:25.200 [INFO][4864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:25.200 [INFO][4864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:25.240 [WARNING][4864] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" HandleID="k8s-pod-network.91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" Workload="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:25.240 [INFO][4864] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" HandleID="k8s-pod-network.91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" Workload="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:25.246 [INFO][4864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:25.309142 containerd[2145]: 2026-04-17 23:36:25.295 [INFO][4780] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647" Apr 17 23:36:25.317763 systemd[1]: run-netns-cni\x2dcb6910b1\x2d9cd2\x2d0474\x2dcdd1\x2dcd6d3fe7d5de.mount: Deactivated successfully. Apr 17 23:36:25.325805 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647-shm.mount: Deactivated successfully. 
Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:24.059 [ERROR][4696] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:24.222 [INFO][4696] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0 coredns-674b8bbfcf- kube-system becb7dd4-85d3-451c-8c3a-f9364e4aeec8 908 0 2026-04-17 23:35:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-247 coredns-674b8bbfcf-v5g4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali82eca6df786 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5g4s" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:24.222 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5g4s" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:24.788 [INFO][4804] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" HandleID="k8s-pod-network.a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Workload="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:24.858 [INFO][4804] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" HandleID="k8s-pod-network.a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Workload="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000352690), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-247", "pod":"coredns-674b8bbfcf-v5g4s", "timestamp":"2026-04-17 23:36:24.788270941 +0000 UTC"}, Hostname:"ip-172-31-31-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004586e0)} Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:24.859 [INFO][4804] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:24.960 [INFO][4804] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:24.960 [INFO][4804] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-247' Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:24.991 [INFO][4804] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" host="ip-172-31-31-247" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.027 [INFO][4804] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-247" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.074 [INFO][4804] ipam/ipam.go 526: Trying affinity for 192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.091 [INFO][4804] ipam/ipam.go 160: Attempting to load block cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.122 [INFO][4804] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.135 [INFO][4804] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" host="ip-172-31-31-247" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.165 [INFO][4804] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.183 [INFO][4804] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" host="ip-172-31-31-247" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.199 [INFO][4804] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.115.194/26] 
block=192.168.115.192/26 handle="k8s-pod-network.a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" host="ip-172-31-31-247" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.199 [INFO][4804] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.115.194/26] handle="k8s-pod-network.a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" host="ip-172-31-31-247" Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.199 [INFO][4804] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:25.329483 containerd[2145]: 2026-04-17 23:36:25.199 [INFO][4804] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.115.194/26] IPv6=[] ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" HandleID="k8s-pod-network.a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Workload="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" Apr 17 23:36:25.334596 containerd[2145]: 2026-04-17 23:36:25.207 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5g4s" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"becb7dd4-85d3-451c-8c3a-f9364e4aeec8", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"", Pod:"coredns-674b8bbfcf-v5g4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82eca6df786", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:25.334596 containerd[2145]: 2026-04-17 23:36:25.207 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.194/32] ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5g4s" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" Apr 17 23:36:25.334596 containerd[2145]: 2026-04-17 23:36:25.207 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82eca6df786 ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5g4s" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" Apr 17 23:36:25.334596 containerd[2145]: 2026-04-17 23:36:25.239 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5g4s" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" Apr 17 23:36:25.334596 containerd[2145]: 2026-04-17 23:36:25.248 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5g4s" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"becb7dd4-85d3-451c-8c3a-f9364e4aeec8", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb", Pod:"coredns-674b8bbfcf-v5g4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82eca6df786", MAC:"a2:0f:15:76:93:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:25.334596 containerd[2145]: 2026-04-17 23:36:25.286 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5g4s" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--v5g4s-eth0" Apr 17 23:36:25.340933 containerd[2145]: time="2026-04-17T23:36:25.335781720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvmqn,Uid:c4ec6d36-3750-48ee-bd55-9f658f7d853e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:25.341080 kubelet[3715]: E0417 23:36:25.338262 3715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:25.341080 kubelet[3715]: E0417 23:36:25.338356 3715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tvmqn" Apr 17 23:36:25.341080 kubelet[3715]: E0417 23:36:25.338391 3715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tvmqn" Apr 17 23:36:25.343850 kubelet[3715]: E0417 23:36:25.338480 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tvmqn_kube-system(c4ec6d36-3750-48ee-bd55-9f658f7d853e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tvmqn_kube-system(c4ec6d36-3750-48ee-bd55-9f658f7d853e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91d1491360d3bccf09bd0ef42261b38d9aed5fd141a6ec138e50db941e443647\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tvmqn" podUID="c4ec6d36-3750-48ee-bd55-9f658f7d853e" Apr 17 23:36:25.484458 containerd[2145]: time="2026-04-17T23:36:25.482831965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:25.489126 containerd[2145]: time="2026-04-17T23:36:25.483978997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:25.489126 containerd[2145]: time="2026-04-17T23:36:25.484418113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:25.489126 containerd[2145]: time="2026-04-17T23:36:25.488004133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:25.493073 systemd-networkd[1694]: cali95998a6f9d6: Link UP Apr 17 23:36:25.494293 systemd-networkd[1694]: cali95998a6f9d6: Gained carrier Apr 17 23:36:25.595318 systemd-networkd[1694]: califcec21d797d: Link UP Apr 17 23:36:25.595793 systemd-networkd[1694]: califcec21d797d: Gained carrier Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:24.747 [INFO][4781] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:24.754 [INFO][4781] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" iface="eth0" netns="/var/run/netns/cni-645e95a8-0795-31a4-654f-b8abe9c655a1" Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:24.755 [INFO][4781] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" iface="eth0" netns="/var/run/netns/cni-645e95a8-0795-31a4-654f-b8abe9c655a1" Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:24.768 [INFO][4781] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" iface="eth0" netns="/var/run/netns/cni-645e95a8-0795-31a4-654f-b8abe9c655a1" Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:24.768 [INFO][4781] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:24.768 [INFO][4781] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:25.157 [INFO][4858] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" HandleID="k8s-pod-network.71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" Workload="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:25.158 [INFO][4858] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:25.526 [INFO][4858] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:25.572 [WARNING][4858] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" HandleID="k8s-pod-network.71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" Workload="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:25.573 [INFO][4858] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" HandleID="k8s-pod-network.71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" Workload="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:25.580 [INFO][4858] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:25.636391 containerd[2145]: 2026-04-17 23:36:25.592 [INFO][4781] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:24.114 [ERROR][4725] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:24.248 [INFO][4725] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0 whisker-7979cdb584- calico-system 7f054ca4-5891-40ee-ba80-c401a6255ea7 916 0 2026-04-17 23:36:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7979cdb584 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-31-247 whisker-7979cdb584-z9jkm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali95998a6f9d6 [] [] }} 
ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Namespace="calico-system" Pod="whisker-7979cdb584-z9jkm" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:24.248 [INFO][4725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Namespace="calico-system" Pod="whisker-7979cdb584-z9jkm" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:24.949 [INFO][4817] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.035 [INFO][4817] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000123880), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-247", "pod":"whisker-7979cdb584-z9jkm", "timestamp":"2026-04-17 23:36:24.949057694 +0000 UTC"}, Hostname:"ip-172-31-31-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003c11e0)} Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.035 [INFO][4817] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.256 [INFO][4817] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.258 [INFO][4817] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-247' Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.288 [INFO][4817] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" host="ip-172-31-31-247" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.301 [INFO][4817] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-247" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.361 [INFO][4817] ipam/ipam.go 526: Trying affinity for 192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.370 [INFO][4817] ipam/ipam.go 160: Attempting to load block cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.376 [INFO][4817] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.379 [INFO][4817] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" host="ip-172-31-31-247" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.384 [INFO][4817] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621 Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.394 [INFO][4817] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" host="ip-172-31-31-247" Apr 17 23:36:25.638014 
containerd[2145]: 2026-04-17 23:36:25.411 [INFO][4817] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.115.195/26] block=192.168.115.192/26 handle="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" host="ip-172-31-31-247" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.413 [INFO][4817] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.115.195/26] handle="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" host="ip-172-31-31-247" Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.415 [INFO][4817] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:25.638014 containerd[2145]: 2026-04-17 23:36:25.417 [INFO][4817] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.115.195/26] IPv6=[] ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:25.641641 containerd[2145]: 2026-04-17 23:36:25.436 [INFO][4725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Namespace="calico-system" Pod="whisker-7979cdb584-z9jkm" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0", GenerateName:"whisker-7979cdb584-", Namespace:"calico-system", SelfLink:"", UID:"7f054ca4-5891-40ee-ba80-c401a6255ea7", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", 
"pod-template-hash":"7979cdb584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"", Pod:"whisker-7979cdb584-z9jkm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali95998a6f9d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:25.641641 containerd[2145]: 2026-04-17 23:36:25.436 [INFO][4725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.195/32] ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Namespace="calico-system" Pod="whisker-7979cdb584-z9jkm" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:25.641641 containerd[2145]: 2026-04-17 23:36:25.436 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95998a6f9d6 ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Namespace="calico-system" Pod="whisker-7979cdb584-z9jkm" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:25.641641 containerd[2145]: 2026-04-17 23:36:25.505 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Namespace="calico-system" Pod="whisker-7979cdb584-z9jkm" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:25.641641 containerd[2145]: 2026-04-17 23:36:25.520 [INFO][4725] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Namespace="calico-system" Pod="whisker-7979cdb584-z9jkm" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0", GenerateName:"whisker-7979cdb584-", Namespace:"calico-system", SelfLink:"", UID:"7f054ca4-5891-40ee-ba80-c401a6255ea7", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7979cdb584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621", Pod:"whisker-7979cdb584-z9jkm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali95998a6f9d6", MAC:"8a:74:91:27:d7:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:25.641641 containerd[2145]: 2026-04-17 23:36:25.568 [INFO][4725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" 
Namespace="calico-system" Pod="whisker-7979cdb584-z9jkm" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:25.649346 systemd[1]: run-netns-cni\x2d645e95a8\x2d0795\x2d31a4\x2d654f\x2db8abe9c655a1.mount: Deactivated successfully. Apr 17 23:36:25.655012 containerd[2145]: time="2026-04-17T23:36:25.654903194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79855f7dd6-v9wp9,Uid:c42f7cc8-188c-4d4f-adfb-58e20708a530,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:25.661564 kubelet[3715]: E0417 23:36:25.661040 3715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:25.661564 kubelet[3715]: E0417 23:36:25.661412 3715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-79855f7dd6-v9wp9" Apr 17 23:36:25.663752 kubelet[3715]: E0417 23:36:25.661459 3715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-79855f7dd6-v9wp9" Apr 17 23:36:25.663752 kubelet[3715]: E0417 23:36:25.661949 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79855f7dd6-v9wp9_calico-system(c42f7cc8-188c-4d4f-adfb-58e20708a530)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79855f7dd6-v9wp9_calico-system(c42f7cc8-188c-4d4f-adfb-58e20708a530)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-79855f7dd6-v9wp9" podUID="c42f7cc8-188c-4d4f-adfb-58e20708a530" Apr 17 23:36:25.681420 containerd[2145]: time="2026-04-17T23:36:25.680743574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:25.681420 containerd[2145]: time="2026-04-17T23:36:25.680956298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:25.681420 containerd[2145]: time="2026-04-17T23:36:25.681006170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:25.681420 containerd[2145]: time="2026-04-17T23:36:25.681240494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:24.266 [ERROR][4735] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:24.396 [INFO][4735] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0 calico-apiserver-79855f7dd6- calico-system 753b7e4f-9fdf-46c1-9a03-9817d9a318d6 903 0 2026-04-17 23:36:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79855f7dd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-247 calico-apiserver-79855f7dd6-9l8cf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] califcec21d797d [] [] }} ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-9l8cf" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:24.396 [INFO][4735] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-9l8cf" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:24.964 [INFO][4829] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" 
HandleID="k8s-pod-network.9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Workload="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.077 [INFO][4829] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" HandleID="k8s-pod-network.9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Workload="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006192e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-247", "pod":"calico-apiserver-79855f7dd6-9l8cf", "timestamp":"2026-04-17 23:36:24.964418846 +0000 UTC"}, Hostname:"ip-172-31-31-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004c8000)} Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.077 [INFO][4829] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.414 [INFO][4829] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.414 [INFO][4829] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-247' Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.419 [INFO][4829] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" host="ip-172-31-31-247" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.437 [INFO][4829] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-247" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.453 [INFO][4829] ipam/ipam.go 526: Trying affinity for 192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.464 [INFO][4829] ipam/ipam.go 160: Attempting to load block cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.469 [INFO][4829] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.469 [INFO][4829] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" host="ip-172-31-31-247" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.475 [INFO][4829] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1 Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.490 [INFO][4829] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" host="ip-172-31-31-247" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.521 [INFO][4829] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.115.196/26] 
block=192.168.115.192/26 handle="k8s-pod-network.9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" host="ip-172-31-31-247" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.521 [INFO][4829] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.115.196/26] handle="k8s-pod-network.9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" host="ip-172-31-31-247" Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.521 [INFO][4829] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:25.682997 containerd[2145]: 2026-04-17 23:36:25.521 [INFO][4829] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.115.196/26] IPv6=[] ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" HandleID="k8s-pod-network.9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Workload="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" Apr 17 23:36:25.684193 containerd[2145]: 2026-04-17 23:36:25.559 [INFO][4735] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-9l8cf" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0", GenerateName:"calico-apiserver-79855f7dd6-", Namespace:"calico-system", SelfLink:"", UID:"753b7e4f-9fdf-46c1-9a03-9817d9a318d6", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79855f7dd6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"", Pod:"calico-apiserver-79855f7dd6-9l8cf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califcec21d797d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:25.684193 containerd[2145]: 2026-04-17 23:36:25.571 [INFO][4735] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.196/32] ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-9l8cf" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" Apr 17 23:36:25.684193 containerd[2145]: 2026-04-17 23:36:25.572 [INFO][4735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcec21d797d ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-9l8cf" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" Apr 17 23:36:25.684193 containerd[2145]: 2026-04-17 23:36:25.591 [INFO][4735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-9l8cf" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" Apr 17 23:36:25.684193 
containerd[2145]: 2026-04-17 23:36:25.610 [INFO][4735] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-9l8cf" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0", GenerateName:"calico-apiserver-79855f7dd6-", Namespace:"calico-system", SelfLink:"", UID:"753b7e4f-9fdf-46c1-9a03-9817d9a318d6", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79855f7dd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1", Pod:"calico-apiserver-79855f7dd6-9l8cf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califcec21d797d", MAC:"76:e0:75:26:94:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:25.684193 containerd[2145]: 
2026-04-17 23:36:25.657 [INFO][4735] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-9l8cf" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--9l8cf-eth0" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:24.507 [INFO][4779] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:24.507 [INFO][4779] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" iface="eth0" netns="/var/run/netns/cni-319bebef-018b-7b37-19e7-0e4b4c1ba83e" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:24.518 [INFO][4779] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" iface="eth0" netns="/var/run/netns/cni-319bebef-018b-7b37-19e7-0e4b4c1ba83e" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:24.522 [INFO][4779] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" iface="eth0" netns="/var/run/netns/cni-319bebef-018b-7b37-19e7-0e4b4c1ba83e" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:24.522 [INFO][4779] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:24.522 [INFO][4779] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:25.312 [INFO][4841] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" HandleID="k8s-pod-network.e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" Workload="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:25.320 [INFO][4841] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:25.581 [INFO][4841] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:25.627 [WARNING][4841] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" HandleID="k8s-pod-network.e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" Workload="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:25.627 [INFO][4841] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" HandleID="k8s-pod-network.e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" Workload="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:25.633 [INFO][4841] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:25.693414 containerd[2145]: 2026-04-17 23:36:25.672 [INFO][4779] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409" Apr 17 23:36:25.705395 containerd[2145]: time="2026-04-17T23:36:25.701420942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c659d559c-t45jr,Uid:6a39f673-617b-4850-b432-79bcea396fab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:25.705637 kubelet[3715]: E0417 23:36:25.703758 3715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 23:36:25.705637 kubelet[3715]: E0417 23:36:25.703837 3715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c659d559c-t45jr" Apr 17 23:36:25.705637 kubelet[3715]: E0417 23:36:25.703874 3715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c659d559c-t45jr" Apr 17 23:36:25.705840 kubelet[3715]: E0417 23:36:25.703962 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c659d559c-t45jr_calico-system(6a39f673-617b-4850-b432-79bcea396fab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c659d559c-t45jr_calico-system(6a39f673-617b-4850-b432-79bcea396fab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c659d559c-t45jr" podUID="6a39f673-617b-4850-b432-79bcea396fab" Apr 17 23:36:25.794014 containerd[2145]: time="2026-04-17T23:36:25.791797250Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:25.801870 containerd[2145]: time="2026-04-17T23:36:25.800486042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:25.801870 containerd[2145]: time="2026-04-17T23:36:25.800542874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:25.801870 containerd[2145]: time="2026-04-17T23:36:25.800724866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:25.828653 containerd[2145]: time="2026-04-17T23:36:25.827370242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:25.828653 containerd[2145]: time="2026-04-17T23:36:25.827527310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:25.828653 containerd[2145]: time="2026-04-17T23:36:25.827565518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:25.844963 containerd[2145]: time="2026-04-17T23:36:25.844664343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:25.947823 containerd[2145]: time="2026-04-17T23:36:25.947659767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5g4s,Uid:becb7dd4-85d3-451c-8c3a-f9364e4aeec8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb\"" Apr 17 23:36:25.976990 containerd[2145]: time="2026-04-17T23:36:25.976897095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xcfhx,Uid:3eb35872-77ff-4818-9956-4687a66f2e28,Namespace:calico-system,Attempt:0,} returns sandbox id \"f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e\"" Apr 17 23:36:25.987199 containerd[2145]: time="2026-04-17T23:36:25.986655411Z" level=info msg="CreateContainer within sandbox \"a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:36:26.006394 containerd[2145]: time="2026-04-17T23:36:26.006152459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:36:26.028700 systemd-networkd[1694]: calief1dc57ba9d: Link UP Apr 17 23:36:26.041136 systemd-networkd[1694]: calief1dc57ba9d: Gained carrier Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.355 [ERROR][4869] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.426 [INFO][4869] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0 csi-node-driver- calico-system 5743ab00-ab78-4f3d-b36a-493284f86675 773 0 2026-04-17 23:36:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-247 csi-node-driver-m7ffs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calief1dc57ba9d [] [] }} ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Namespace="calico-system" Pod="csi-node-driver-m7ffs" WorkloadEndpoint="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.426 [INFO][4869] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Namespace="calico-system" Pod="csi-node-driver-m7ffs" WorkloadEndpoint="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.752 [INFO][4966] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" HandleID="k8s-pod-network.7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Workload="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.810 [INFO][4966] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" HandleID="k8s-pod-network.7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Workload="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000101eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-247", "pod":"csi-node-driver-m7ffs", "timestamp":"2026-04-17 23:36:25.752133086 +0000 UTC"}, Hostname:"ip-172-31-31-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004c5600)} Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.810 [INFO][4966] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.810 [INFO][4966] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.810 [INFO][4966] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-247' Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.816 [INFO][4966] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" host="ip-172-31-31-247" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.874 [INFO][4966] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-247" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.901 [INFO][4966] ipam/ipam.go 526: Trying affinity for 192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.908 [INFO][4966] ipam/ipam.go 160: Attempting to load block cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.922 [INFO][4966] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.922 [INFO][4966] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" host="ip-172-31-31-247" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.931 [INFO][4966] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.945 [INFO][4966] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" host="ip-172-31-31-247" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.971 [INFO][4966] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.115.197/26] block=192.168.115.192/26 handle="k8s-pod-network.7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" host="ip-172-31-31-247" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.971 [INFO][4966] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.115.197/26] handle="k8s-pod-network.7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" host="ip-172-31-31-247" Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.971 [INFO][4966] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:36:26.106825 containerd[2145]: 2026-04-17 23:36:25.972 [INFO][4966] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.115.197/26] IPv6=[] ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" HandleID="k8s-pod-network.7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Workload="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" Apr 17 23:36:26.117324 containerd[2145]: 2026-04-17 23:36:25.994 [INFO][4869] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Namespace="calico-system" Pod="csi-node-driver-m7ffs" WorkloadEndpoint="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5743ab00-ab78-4f3d-b36a-493284f86675", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"", Pod:"csi-node-driver-m7ffs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calief1dc57ba9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:26.117324 containerd[2145]: 2026-04-17 23:36:25.997 [INFO][4869] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.197/32] ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Namespace="calico-system" Pod="csi-node-driver-m7ffs" WorkloadEndpoint="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" Apr 17 23:36:26.117324 containerd[2145]: 2026-04-17 23:36:25.997 [INFO][4869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief1dc57ba9d ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Namespace="calico-system" Pod="csi-node-driver-m7ffs" WorkloadEndpoint="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" Apr 17 23:36:26.117324 containerd[2145]: 2026-04-17 23:36:26.045 [INFO][4869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Namespace="calico-system" Pod="csi-node-driver-m7ffs" WorkloadEndpoint="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" Apr 17 23:36:26.117324 containerd[2145]: 2026-04-17 23:36:26.045 [INFO][4869] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Namespace="calico-system" Pod="csi-node-driver-m7ffs" WorkloadEndpoint="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"5743ab00-ab78-4f3d-b36a-493284f86675", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f", Pod:"csi-node-driver-m7ffs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calief1dc57ba9d", MAC:"7a:2e:a2:00:3c:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:26.117324 containerd[2145]: 2026-04-17 23:36:26.071 [INFO][4869] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f" Namespace="calico-system" Pod="csi-node-driver-m7ffs" WorkloadEndpoint="ip--172--31--31--247-k8s-csi--node--driver--m7ffs-eth0" Apr 17 23:36:26.174831 containerd[2145]: time="2026-04-17T23:36:26.174650472Z" level=info msg="CreateContainer within sandbox \"a0dc48103fcb5638a979abe5e0759d4a555f65472f93cd14c28119ef4b9978bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1590e691a649d8e6e4dd73df830c31d1d402f7b0958ffcf23cdf85a64f41989a\"" Apr 17 
23:36:26.184624 containerd[2145]: time="2026-04-17T23:36:26.184553880Z" level=info msg="StartContainer for \"1590e691a649d8e6e4dd73df830c31d1d402f7b0958ffcf23cdf85a64f41989a\"" Apr 17 23:36:26.291389 systemd[1]: run-netns-cni\x2d319bebef\x2d018b\x2d7b37\x2d19e7\x2d0e4b4c1ba83e.mount: Deactivated successfully. Apr 17 23:36:26.291674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e894d7f353ae02b157e5bded0e008e66a92b0068b1d09cbf30da03dd42bed409-shm.mount: Deactivated successfully. Apr 17 23:36:26.291901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71e1fb48ba63afd9da8f6e55e3515570eb66b0ed837db4a4cab3b97801807140-shm.mount: Deactivated successfully. Apr 17 23:36:26.326737 containerd[2145]: time="2026-04-17T23:36:26.326219317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c659d559c-t45jr,Uid:6a39f673-617b-4850-b432-79bcea396fab,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:26.346281 containerd[2145]: time="2026-04-17T23:36:26.346205701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvmqn,Uid:c4ec6d36-3750-48ee-bd55-9f658f7d853e,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:26.378226 containerd[2145]: time="2026-04-17T23:36:26.375373093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79855f7dd6-v9wp9,Uid:c42f7cc8-188c-4d4f-adfb-58e20708a530,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:26.443766 containerd[2145]: time="2026-04-17T23:36:26.441338677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:26.443766 containerd[2145]: time="2026-04-17T23:36:26.441472802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:26.443766 containerd[2145]: time="2026-04-17T23:36:26.441537842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:26.443766 containerd[2145]: time="2026-04-17T23:36:26.441738086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:26.556825 containerd[2145]: time="2026-04-17T23:36:26.556484006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7979cdb584-z9jkm,Uid:7f054ca4-5891-40ee-ba80-c401a6255ea7,Namespace:calico-system,Attempt:0,} returns sandbox id \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\"" Apr 17 23:36:26.653061 containerd[2145]: time="2026-04-17T23:36:26.653006523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79855f7dd6-9l8cf,Uid:753b7e4f-9fdf-46c1-9a03-9817d9a318d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1\"" Apr 17 23:36:26.757375 systemd-networkd[1694]: cali6d38a08862d: Gained IPv6LL Apr 17 23:36:27.076798 systemd-networkd[1694]: cali82eca6df786: Gained IPv6LL Apr 17 23:36:27.141921 systemd-networkd[1694]: cali95998a6f9d6: Gained IPv6LL Apr 17 23:36:27.192475 containerd[2145]: time="2026-04-17T23:36:27.192393661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7ffs,Uid:5743ab00-ab78-4f3d-b36a-493284f86675,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f\"" Apr 17 23:36:27.207255 systemd-networkd[1694]: calief1dc57ba9d: Gained IPv6LL Apr 17 23:36:27.207767 systemd-networkd[1694]: califcec21d797d: Gained IPv6LL Apr 17 23:36:27.230562 containerd[2145]: time="2026-04-17T23:36:27.230481265Z" level=info msg="StartContainer for 
\"1590e691a649d8e6e4dd73df830c31d1d402f7b0958ffcf23cdf85a64f41989a\" returns successfully" Apr 17 23:36:27.472311 kubelet[3715]: I0417 23:36:27.470690 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v5g4s" podStartSLOduration=47.470668491 podStartE2EDuration="47.470668491s" podCreationTimestamp="2026-04-17 23:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:27.468467079 +0000 UTC m=+53.018259361" watchObservedRunningTime="2026-04-17 23:36:27.470668491 +0000 UTC m=+53.020460773" Apr 17 23:36:27.887319 systemd-networkd[1694]: calicedaf12afb9: Link UP Apr 17 23:36:27.896627 systemd-networkd[1694]: calicedaf12afb9: Gained carrier Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.200 [ERROR][5248] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.317 [INFO][5248] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0 calico-kube-controllers-6c659d559c- calico-system 6a39f673-617b-4850-b432-79bcea396fab 925 0 2026-04-17 23:36:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c659d559c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-247 calico-kube-controllers-6c659d559c-t45jr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicedaf12afb9 [] [] }} ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" 
Namespace="calico-system" Pod="calico-kube-controllers-6c659d559c-t45jr" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.317 [INFO][5248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Namespace="calico-system" Pod="calico-kube-controllers-6c659d559c-t45jr" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.615 [INFO][5360] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" HandleID="k8s-pod-network.469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Workload="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.663 [INFO][5360] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" HandleID="k8s-pod-network.469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Workload="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002598b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-247", "pod":"calico-kube-controllers-6c659d559c-t45jr", "timestamp":"2026-04-17 23:36:27.615005703 +0000 UTC"}, Hostname:"ip-172-31-31-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010b340)} Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.668 [INFO][5360] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM 
lock. Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.669 [INFO][5360] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.671 [INFO][5360] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-247' Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.679 [INFO][5360] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" host="ip-172-31-31-247" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.715 [INFO][5360] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-247" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.741 [INFO][5360] ipam/ipam.go 526: Trying affinity for 192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.754 [INFO][5360] ipam/ipam.go 160: Attempting to load block cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.764 [INFO][5360] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.788 [INFO][5360] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" host="ip-172-31-31-247" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.794 [INFO][5360] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.804 [INFO][5360] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" host="ip-172-31-31-247" Apr 17 23:36:27.951596 
containerd[2145]: 2026-04-17 23:36:27.825 [INFO][5360] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.115.198/26] block=192.168.115.192/26 handle="k8s-pod-network.469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" host="ip-172-31-31-247" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.826 [INFO][5360] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.115.198/26] handle="k8s-pod-network.469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" host="ip-172-31-31-247" Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.827 [INFO][5360] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:27.951596 containerd[2145]: 2026-04-17 23:36:27.836 [INFO][5360] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.115.198/26] IPv6=[] ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" HandleID="k8s-pod-network.469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Workload="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:27.958261 containerd[2145]: 2026-04-17 23:36:27.865 [INFO][5248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Namespace="calico-system" Pod="calico-kube-controllers-6c659d559c-t45jr" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0", GenerateName:"calico-kube-controllers-6c659d559c-", Namespace:"calico-system", SelfLink:"", UID:"6a39f673-617b-4850-b432-79bcea396fab", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c659d559c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"", Pod:"calico-kube-controllers-6c659d559c-t45jr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicedaf12afb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:27.958261 containerd[2145]: 2026-04-17 23:36:27.871 [INFO][5248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.198/32] ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Namespace="calico-system" Pod="calico-kube-controllers-6c659d559c-t45jr" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:27.958261 containerd[2145]: 2026-04-17 23:36:27.872 [INFO][5248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicedaf12afb9 ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Namespace="calico-system" Pod="calico-kube-controllers-6c659d559c-t45jr" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:27.958261 containerd[2145]: 2026-04-17 23:36:27.880 [INFO][5248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Namespace="calico-system" Pod="calico-kube-controllers-6c659d559c-t45jr" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:27.958261 containerd[2145]: 2026-04-17 23:36:27.880 [INFO][5248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Namespace="calico-system" Pod="calico-kube-controllers-6c659d559c-t45jr" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0", GenerateName:"calico-kube-controllers-6c659d559c-", Namespace:"calico-system", SelfLink:"", UID:"6a39f673-617b-4850-b432-79bcea396fab", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c659d559c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c", Pod:"calico-kube-controllers-6c659d559c-t45jr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicedaf12afb9", MAC:"2e:6e:6f:30:0f:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:27.958261 containerd[2145]: 2026-04-17 23:36:27.926 [INFO][5248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c" Namespace="calico-system" Pod="calico-kube-controllers-6c659d559c-t45jr" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--kube--controllers--6c659d559c--t45jr-eth0" Apr 17 23:36:28.074535 systemd-networkd[1694]: cali84f5ed54543: Link UP Apr 17 23:36:28.080202 systemd-networkd[1694]: cali84f5ed54543: Gained carrier Apr 17 23:36:28.132447 containerd[2145]: time="2026-04-17T23:36:28.121373582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:28.132447 containerd[2145]: time="2026-04-17T23:36:28.121501262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:28.132447 containerd[2145]: time="2026-04-17T23:36:28.121539074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.132447 containerd[2145]: time="2026-04-17T23:36:28.122922998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.215 [ERROR][5279] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.403 [INFO][5279] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0 calico-apiserver-79855f7dd6- calico-system c42f7cc8-188c-4d4f-adfb-58e20708a530 933 0 2026-04-17 23:36:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79855f7dd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-247 calico-apiserver-79855f7dd6-v9wp9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali84f5ed54543 [] [] }} ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-v9wp9" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.403 [INFO][5279] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-v9wp9" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.705 [INFO][5366] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" 
HandleID="k8s-pod-network.934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Workload="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.738 [INFO][5366] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" HandleID="k8s-pod-network.934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Workload="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ee520), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-247", "pod":"calico-apiserver-79855f7dd6-v9wp9", "timestamp":"2026-04-17 23:36:27.705053584 +0000 UTC"}, Hostname:"ip-172-31-31-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000478420)} Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.738 [INFO][5366] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.827 [INFO][5366] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.831 [INFO][5366] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-247' Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.865 [INFO][5366] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" host="ip-172-31-31-247" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.882 [INFO][5366] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-247" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.901 [INFO][5366] ipam/ipam.go 526: Trying affinity for 192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.933 [INFO][5366] ipam/ipam.go 160: Attempting to load block cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.944 [INFO][5366] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.945 [INFO][5366] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" host="ip-172-31-31-247" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.962 [INFO][5366] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:27.979 [INFO][5366] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" host="ip-172-31-31-247" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:28.008 [INFO][5366] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.115.199/26] 
block=192.168.115.192/26 handle="k8s-pod-network.934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" host="ip-172-31-31-247" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:28.008 [INFO][5366] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.115.199/26] handle="k8s-pod-network.934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" host="ip-172-31-31-247" Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:28.008 [INFO][5366] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:28.180288 containerd[2145]: 2026-04-17 23:36:28.008 [INFO][5366] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.115.199/26] IPv6=[] ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" HandleID="k8s-pod-network.934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Workload="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:28.181550 containerd[2145]: 2026-04-17 23:36:28.026 [INFO][5279] cni-plugin/k8s.go 418: Populated endpoint ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-v9wp9" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0", GenerateName:"calico-apiserver-79855f7dd6-", Namespace:"calico-system", SelfLink:"", UID:"c42f7cc8-188c-4d4f-adfb-58e20708a530", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79855f7dd6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"", Pod:"calico-apiserver-79855f7dd6-v9wp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali84f5ed54543", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:28.181550 containerd[2145]: 2026-04-17 23:36:28.026 [INFO][5279] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.199/32] ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-v9wp9" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:28.181550 containerd[2145]: 2026-04-17 23:36:28.026 [INFO][5279] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84f5ed54543 ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-v9wp9" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:28.181550 containerd[2145]: 2026-04-17 23:36:28.098 [INFO][5279] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-v9wp9" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:28.181550 
containerd[2145]: 2026-04-17 23:36:28.100 [INFO][5279] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-v9wp9" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0", GenerateName:"calico-apiserver-79855f7dd6-", Namespace:"calico-system", SelfLink:"", UID:"c42f7cc8-188c-4d4f-adfb-58e20708a530", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79855f7dd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c", Pod:"calico-apiserver-79855f7dd6-v9wp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali84f5ed54543", MAC:"3e:26:75:01:b0:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:28.181550 containerd[2145]: 
2026-04-17 23:36:28.137 [INFO][5279] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c" Namespace="calico-system" Pod="calico-apiserver-79855f7dd6-v9wp9" WorkloadEndpoint="ip--172--31--31--247-k8s-calico--apiserver--79855f7dd6--v9wp9-eth0" Apr 17 23:36:28.250269 systemd-networkd[1694]: cali098dd88e437: Link UP Apr 17 23:36:28.250738 systemd-networkd[1694]: cali098dd88e437: Gained carrier Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:27.363 [ERROR][5281] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:27.539 [INFO][5281] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0 coredns-674b8bbfcf- kube-system c4ec6d36-3750-48ee-bd55-9f658f7d853e 934 0 2026-04-17 23:35:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-247 coredns-674b8bbfcf-tvmqn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali098dd88e437 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvmqn" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:27.539 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvmqn" 
WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:27.824 [INFO][5373] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" HandleID="k8s-pod-network.1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Workload="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:27.914 [INFO][5373] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" HandleID="k8s-pod-network.1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Workload="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000357480), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-247", "pod":"coredns-674b8bbfcf-tvmqn", "timestamp":"2026-04-17 23:36:27.82473244 +0000 UTC"}, Hostname:"ip-172-31-31-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400060e160)} Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:27.914 [INFO][5373] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.008 [INFO][5373] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.009 [INFO][5373] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-247' Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.025 [INFO][5373] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" host="ip-172-31-31-247" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.050 [INFO][5373] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-247" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.080 [INFO][5373] ipam/ipam.go 526: Trying affinity for 192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.092 [INFO][5373] ipam/ipam.go 160: Attempting to load block cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.109 [INFO][5373] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.110 [INFO][5373] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" host="ip-172-31-31-247" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.124 [INFO][5373] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.157 [INFO][5373] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" host="ip-172-31-31-247" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.200 [INFO][5373] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.115.200/26] 
block=192.168.115.192/26 handle="k8s-pod-network.1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" host="ip-172-31-31-247" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.201 [INFO][5373] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.115.200/26] handle="k8s-pod-network.1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" host="ip-172-31-31-247" Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.201 [INFO][5373] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:28.340237 containerd[2145]: 2026-04-17 23:36:28.201 [INFO][5373] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.115.200/26] IPv6=[] ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" HandleID="k8s-pod-network.1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Workload="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:28.341718 containerd[2145]: 2026-04-17 23:36:28.214 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvmqn" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c4ec6d36-3750-48ee-bd55-9f658f7d853e", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"", Pod:"coredns-674b8bbfcf-tvmqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali098dd88e437", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:28.341718 containerd[2145]: 2026-04-17 23:36:28.215 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.200/32] ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvmqn" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:28.341718 containerd[2145]: 2026-04-17 23:36:28.215 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali098dd88e437 ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvmqn" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:28.341718 containerd[2145]: 2026-04-17 23:36:28.255 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvmqn" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:28.341718 containerd[2145]: 2026-04-17 23:36:28.263 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvmqn" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c4ec6d36-3750-48ee-bd55-9f658f7d853e", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f", Pod:"coredns-674b8bbfcf-tvmqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali098dd88e437", MAC:"96:8d:a8:31:c1:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:28.341718 containerd[2145]: 2026-04-17 23:36:28.298 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvmqn" WorkloadEndpoint="ip--172--31--31--247-k8s-coredns--674b8bbfcf--tvmqn-eth0" Apr 17 23:36:28.349824 kernel: calico-node[5244]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:36:28.768240 containerd[2145]: time="2026-04-17T23:36:28.766859981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c659d559c-t45jr,Uid:6a39f673-617b-4850-b432-79bcea396fab,Namespace:calico-system,Attempt:0,} returns sandbox id \"469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c\"" Apr 17 23:36:28.929193 containerd[2145]: time="2026-04-17T23:36:28.868079166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:28.929193 containerd[2145]: time="2026-04-17T23:36:28.869309034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:28.929193 containerd[2145]: time="2026-04-17T23:36:28.869354298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.929193 containerd[2145]: time="2026-04-17T23:36:28.870572202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.929193 containerd[2145]: time="2026-04-17T23:36:28.865706466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:28.929193 containerd[2145]: time="2026-04-17T23:36:28.865953186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:28.929193 containerd[2145]: time="2026-04-17T23:36:28.866001582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.929193 containerd[2145]: time="2026-04-17T23:36:28.866221050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.997082 systemd-networkd[1694]: calicedaf12afb9: Gained IPv6LL Apr 17 23:36:29.180817 containerd[2145]: time="2026-04-17T23:36:29.180666327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvmqn,Uid:c4ec6d36-3750-48ee-bd55-9f658f7d853e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f\"" Apr 17 23:36:29.201700 containerd[2145]: time="2026-04-17T23:36:29.201628647Z" level=info msg="CreateContainer within sandbox \"1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:36:29.259710 containerd[2145]: time="2026-04-17T23:36:29.258844863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79855f7dd6-v9wp9,Uid:c42f7cc8-188c-4d4f-adfb-58e20708a530,Namespace:calico-system,Attempt:0,} returns sandbox id \"934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c\"" Apr 17 23:36:29.270927 containerd[2145]: time="2026-04-17T23:36:29.270462496Z" level=info msg="CreateContainer within sandbox \"1a26a26858d1105dd812e0c738272e296bbc775c71f524523347a2f8fa97d43f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ec00550d9bdbe4a7eb7ff83e80a62be55fd8e480df11aa1da525a753dd1f15c\"" Apr 17 23:36:29.274996 containerd[2145]: time="2026-04-17T23:36:29.274753408Z" level=info msg="StartContainer for \"7ec00550d9bdbe4a7eb7ff83e80a62be55fd8e480df11aa1da525a753dd1f15c\"" Apr 17 23:36:29.508429 systemd-networkd[1694]: cali84f5ed54543: Gained IPv6LL Apr 17 23:36:29.642222 containerd[2145]: time="2026-04-17T23:36:29.640398449Z" level=info msg="StartContainer for \"7ec00550d9bdbe4a7eb7ff83e80a62be55fd8e480df11aa1da525a753dd1f15c\" returns successfully" Apr 17 23:36:29.667742 systemd-networkd[1694]: vxlan.calico: Link UP Apr 17 23:36:29.671161 systemd-networkd[1694]: vxlan.calico: Gained 
carrier Apr 17 23:36:30.216210 systemd-resolved[2020]: Under memory pressure, flushing caches. Apr 17 23:36:30.219830 systemd-journald[1607]: Under memory pressure, flushing caches. Apr 17 23:36:30.216282 systemd-resolved[2020]: Flushed all caches. Apr 17 23:36:30.276589 systemd-networkd[1694]: cali098dd88e437: Gained IPv6LL Apr 17 23:36:30.620627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234400176.mount: Deactivated successfully. Apr 17 23:36:30.739916 kubelet[3715]: I0417 23:36:30.737406 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tvmqn" podStartSLOduration=50.737384371 podStartE2EDuration="50.737384371s" podCreationTimestamp="2026-04-17 23:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:30.60466089 +0000 UTC m=+56.154453184" watchObservedRunningTime="2026-04-17 23:36:30.737384371 +0000 UTC m=+56.287176653" Apr 17 23:36:31.300293 systemd-networkd[1694]: vxlan.calico: Gained IPv6LL Apr 17 23:36:31.498718 containerd[2145]: time="2026-04-17T23:36:31.498595267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:31.501933 containerd[2145]: time="2026-04-17T23:36:31.501876235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 17 23:36:31.503426 containerd[2145]: time="2026-04-17T23:36:31.503348311Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:31.509495 containerd[2145]: time="2026-04-17T23:36:31.509405875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:31.513267 containerd[2145]: time="2026-04-17T23:36:31.512348827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 5.50465258s" Apr 17 23:36:31.513267 containerd[2145]: time="2026-04-17T23:36:31.512417275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 17 23:36:31.517665 containerd[2145]: time="2026-04-17T23:36:31.516855859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:36:31.524851 containerd[2145]: time="2026-04-17T23:36:31.524789011Z" level=info msg="CreateContainer within sandbox \"f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:36:31.558945 containerd[2145]: time="2026-04-17T23:36:31.558721399Z" level=info msg="CreateContainer within sandbox \"f982e46f08693c2d5085e0c0b98a2beaa8b55c3702481c693cc8aeaecb44458e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4559ddfe2dac5f7985d7e6e17fe2f31a982305d06ef18accae6d86a148692259\"" Apr 17 23:36:31.565520 containerd[2145]: time="2026-04-17T23:36:31.564365899Z" level=info msg="StartContainer for \"4559ddfe2dac5f7985d7e6e17fe2f31a982305d06ef18accae6d86a148692259\"" Apr 17 23:36:31.701940 containerd[2145]: time="2026-04-17T23:36:31.701841152Z" level=info msg="StartContainer for \"4559ddfe2dac5f7985d7e6e17fe2f31a982305d06ef18accae6d86a148692259\" returns successfully" Apr 17 23:36:32.260748 systemd-resolved[2020]: Under memory pressure, flushing caches. 
Apr 17 23:36:32.260775 systemd-resolved[2020]: Flushed all caches. Apr 17 23:36:32.264134 systemd-journald[1607]: Under memory pressure, flushing caches. Apr 17 23:36:32.607832 kubelet[3715]: I0417 23:36:32.605652 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-xcfhx" podStartSLOduration=25.085596684 podStartE2EDuration="30.605628212s" podCreationTimestamp="2026-04-17 23:36:02 +0000 UTC" firstStartedPulling="2026-04-17 23:36:25.996191343 +0000 UTC m=+51.545983625" lastFinishedPulling="2026-04-17 23:36:31.516222859 +0000 UTC m=+57.066015153" observedRunningTime="2026-04-17 23:36:32.604149056 +0000 UTC m=+58.153941362" watchObservedRunningTime="2026-04-17 23:36:32.605628212 +0000 UTC m=+58.155420494" Apr 17 23:36:32.935868 containerd[2145]: time="2026-04-17T23:36:32.935706982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:32.938389 containerd[2145]: time="2026-04-17T23:36:32.938300218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 17 23:36:32.939123 containerd[2145]: time="2026-04-17T23:36:32.938800306Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:32.943799 containerd[2145]: time="2026-04-17T23:36:32.943071610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:32.944893 containerd[2145]: time="2026-04-17T23:36:32.944832094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.427917147s" Apr 17 23:36:32.945022 containerd[2145]: time="2026-04-17T23:36:32.944891506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 17 23:36:32.949184 containerd[2145]: time="2026-04-17T23:36:32.948997054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:36:32.955294 containerd[2145]: time="2026-04-17T23:36:32.955183222Z" level=info msg="CreateContainer within sandbox \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:36:32.981796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount881039395.mount: Deactivated successfully. Apr 17 23:36:32.985498 containerd[2145]: time="2026-04-17T23:36:32.982448950Z" level=info msg="CreateContainer within sandbox \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509\"" Apr 17 23:36:32.985498 containerd[2145]: time="2026-04-17T23:36:32.984154294Z" level=info msg="StartContainer for \"42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509\"" Apr 17 23:36:33.137799 containerd[2145]: time="2026-04-17T23:36:33.137683291Z" level=info msg="StartContainer for \"42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509\" returns successfully" Apr 17 23:36:33.817447 ntpd[2102]: Listen normally on 6 vxlan.calico 192.168.115.192:123 Apr 17 23:36:33.817586 ntpd[2102]: Listen normally on 7 cali6d38a08862d [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 6 vxlan.calico 
192.168.115.192:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 7 cali6d38a08862d [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 8 cali82eca6df786 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 9 cali95998a6f9d6 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 10 califcec21d797d [fe80::ecee:eeff:feee:eeee%7]:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 11 calief1dc57ba9d [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 12 calicedaf12afb9 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 13 cali84f5ed54543 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 14 cali098dd88e437 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:36:33.818299 ntpd[2102]: 17 Apr 23:36:33 ntpd[2102]: Listen normally on 15 vxlan.calico [fe80::6453:caff:fed1:fc91%12]:123 Apr 17 23:36:33.817677 ntpd[2102]: Listen normally on 8 cali82eca6df786 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 17 23:36:33.817747 ntpd[2102]: Listen normally on 9 cali95998a6f9d6 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 17 23:36:33.817815 ntpd[2102]: Listen normally on 10 califcec21d797d [fe80::ecee:eeff:feee:eeee%7]:123 Apr 17 23:36:33.817884 ntpd[2102]: Listen normally on 11 calief1dc57ba9d [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:36:33.817955 ntpd[2102]: Listen normally on 12 calicedaf12afb9 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:36:33.818022 ntpd[2102]: Listen normally on 13 cali84f5ed54543 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:36:33.818124 ntpd[2102]: Listen normally on 14 cali098dd88e437 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 
23:36:33.818200 ntpd[2102]: Listen normally on 15 vxlan.calico [fe80::6453:caff:fed1:fc91%12]:123 Apr 17 23:36:35.755007 containerd[2145]: time="2026-04-17T23:36:35.754919928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:35.756648 containerd[2145]: time="2026-04-17T23:36:35.756588348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 17 23:36:35.757964 containerd[2145]: time="2026-04-17T23:36:35.757763040Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:35.763130 containerd[2145]: time="2026-04-17T23:36:35.762747072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:35.764690 containerd[2145]: time="2026-04-17T23:36:35.764516340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.815458506s" Apr 17 23:36:35.764690 containerd[2145]: time="2026-04-17T23:36:35.764567856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:36:35.768731 containerd[2145]: time="2026-04-17T23:36:35.768310488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:36:35.780876 containerd[2145]: time="2026-04-17T23:36:35.780811692Z" level=info 
msg="CreateContainer within sandbox \"9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:36:35.810303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474859743.mount: Deactivated successfully. Apr 17 23:36:35.813509 containerd[2145]: time="2026-04-17T23:36:35.813253440Z" level=info msg="CreateContainer within sandbox \"9fdf5000068d0bc4ae61628b1284f6348a0b8301c462502def68cfe50dc6a2f1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7ee5142f141b29fb3a17ff932ac05b975210b9cfb60a172a698454b363c6b9a4\"" Apr 17 23:36:35.815682 containerd[2145]: time="2026-04-17T23:36:35.814314576Z" level=info msg="StartContainer for \"7ee5142f141b29fb3a17ff932ac05b975210b9cfb60a172a698454b363c6b9a4\"" Apr 17 23:36:35.948781 containerd[2145]: time="2026-04-17T23:36:35.948702877Z" level=info msg="StartContainer for \"7ee5142f141b29fb3a17ff932ac05b975210b9cfb60a172a698454b363c6b9a4\" returns successfully" Apr 17 23:36:37.209517 containerd[2145]: time="2026-04-17T23:36:37.208632515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:37.210827 containerd[2145]: time="2026-04-17T23:36:37.210757067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 17 23:36:37.213151 containerd[2145]: time="2026-04-17T23:36:37.212419715Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:37.221135 containerd[2145]: time="2026-04-17T23:36:37.220131539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:37.221362 
containerd[2145]: time="2026-04-17T23:36:37.221319407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.452951967s" Apr 17 23:36:37.221488 containerd[2145]: time="2026-04-17T23:36:37.221459615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 17 23:36:37.223772 containerd[2145]: time="2026-04-17T23:36:37.223721111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:36:37.232405 containerd[2145]: time="2026-04-17T23:36:37.232357475Z" level=info msg="CreateContainer within sandbox \"7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:36:37.254982 containerd[2145]: time="2026-04-17T23:36:37.254809187Z" level=info msg="CreateContainer within sandbox \"7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e876d726df585d03acdf3d062398aaee8044c332706aabfcf80565db975b4589\"" Apr 17 23:36:37.264358 containerd[2145]: time="2026-04-17T23:36:37.259374107Z" level=info msg="StartContainer for \"e876d726df585d03acdf3d062398aaee8044c332706aabfcf80565db975b4589\"" Apr 17 23:36:37.372842 systemd[1]: run-containerd-runc-k8s.io-e876d726df585d03acdf3d062398aaee8044c332706aabfcf80565db975b4589-runc.YiJ7lx.mount: Deactivated successfully. 
Apr 17 23:36:37.467425 containerd[2145]: time="2026-04-17T23:36:37.467192328Z" level=info msg="StartContainer for \"e876d726df585d03acdf3d062398aaee8044c332706aabfcf80565db975b4589\" returns successfully" Apr 17 23:36:37.614441 kubelet[3715]: I0417 23:36:37.614363 3715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:36:39.988543 containerd[2145]: time="2026-04-17T23:36:39.988471421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:39.991281 containerd[2145]: time="2026-04-17T23:36:39.991214465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 17 23:36:39.993064 containerd[2145]: time="2026-04-17T23:36:39.992992241Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:39.998235 containerd[2145]: time="2026-04-17T23:36:39.998165561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:39.999779 containerd[2145]: time="2026-04-17T23:36:39.999227597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.773195634s" Apr 17 23:36:39.999779 containerd[2145]: time="2026-04-17T23:36:39.999288497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference 
\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 17 23:36:40.002485 containerd[2145]: time="2026-04-17T23:36:40.002425741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:36:40.035466 containerd[2145]: time="2026-04-17T23:36:40.035253265Z" level=info msg="CreateContainer within sandbox \"469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:36:40.070030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631960100.mount: Deactivated successfully. Apr 17 23:36:40.075315 containerd[2145]: time="2026-04-17T23:36:40.074745421Z" level=info msg="CreateContainer within sandbox \"469431e3e602098e03aae2a7d05cd3d8f0f39ffb00572ac25061e8dfa7402b3c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bf48250da9b899562ec725f2893e4082c4a67e327f3cfc7cedcbbfc37e93fd3e\"" Apr 17 23:36:40.077236 containerd[2145]: time="2026-04-17T23:36:40.076698205Z" level=info msg="StartContainer for \"bf48250da9b899562ec725f2893e4082c4a67e327f3cfc7cedcbbfc37e93fd3e\"" Apr 17 23:36:40.227637 containerd[2145]: time="2026-04-17T23:36:40.226864718Z" level=info msg="StartContainer for \"bf48250da9b899562ec725f2893e4082c4a67e327f3cfc7cedcbbfc37e93fd3e\" returns successfully" Apr 17 23:36:40.377663 containerd[2145]: time="2026-04-17T23:36:40.376971315Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:40.379688 containerd[2145]: time="2026-04-17T23:36:40.379634775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:36:40.390531 containerd[2145]: time="2026-04-17T23:36:40.390448455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 387.891746ms" Apr 17 23:36:40.390729 containerd[2145]: time="2026-04-17T23:36:40.390554487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:36:40.396055 containerd[2145]: time="2026-04-17T23:36:40.393768543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:36:40.403962 containerd[2145]: time="2026-04-17T23:36:40.403884267Z" level=info msg="CreateContainer within sandbox \"934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:36:40.430461 containerd[2145]: time="2026-04-17T23:36:40.429475671Z" level=info msg="CreateContainer within sandbox \"934032af31620c0e6c38aa13aba5ded17a4621a0fc9c13387e35d88a6a7e952c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a8ad07baa06edced4f27042cf4e2baae181c7ef106c6b296d3d114301511b3fd\"" Apr 17 23:36:40.432732 containerd[2145]: time="2026-04-17T23:36:40.432597999Z" level=info msg="StartContainer for \"a8ad07baa06edced4f27042cf4e2baae181c7ef106c6b296d3d114301511b3fd\"" Apr 17 23:36:40.666494 kubelet[3715]: I0417 23:36:40.665866 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-79855f7dd6-9l8cf" podStartSLOduration=30.596359927 podStartE2EDuration="39.665842996s" podCreationTimestamp="2026-04-17 23:36:01 +0000 UTC" firstStartedPulling="2026-04-17 23:36:26.697306287 +0000 UTC m=+52.247098569" lastFinishedPulling="2026-04-17 23:36:35.766789368 +0000 UTC m=+61.316581638" observedRunningTime="2026-04-17 
23:36:36.630710628 +0000 UTC m=+62.180502922" watchObservedRunningTime="2026-04-17 23:36:40.665842996 +0000 UTC m=+66.215635278" Apr 17 23:36:40.796775 containerd[2145]: time="2026-04-17T23:36:40.796675457Z" level=info msg="StartContainer for \"a8ad07baa06edced4f27042cf4e2baae181c7ef106c6b296d3d114301511b3fd\" returns successfully" Apr 17 23:36:40.832889 kubelet[3715]: I0417 23:36:40.832750 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c659d559c-t45jr" podStartSLOduration=24.624139325 podStartE2EDuration="35.832724897s" podCreationTimestamp="2026-04-17 23:36:05 +0000 UTC" firstStartedPulling="2026-04-17 23:36:28.792611789 +0000 UTC m=+54.342404071" lastFinishedPulling="2026-04-17 23:36:40.001197373 +0000 UTC m=+65.550989643" observedRunningTime="2026-04-17 23:36:40.673239016 +0000 UTC m=+66.223031322" watchObservedRunningTime="2026-04-17 23:36:40.832724897 +0000 UTC m=+66.382517167" Apr 17 23:36:41.712261 kubelet[3715]: I0417 23:36:41.711675 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-79855f7dd6-v9wp9" podStartSLOduration=29.58395111 podStartE2EDuration="40.711358445s" podCreationTimestamp="2026-04-17 23:36:01 +0000 UTC" firstStartedPulling="2026-04-17 23:36:29.264351208 +0000 UTC m=+54.814143490" lastFinishedPulling="2026-04-17 23:36:40.391758555 +0000 UTC m=+65.941550825" observedRunningTime="2026-04-17 23:36:41.706901933 +0000 UTC m=+67.256694227" watchObservedRunningTime="2026-04-17 23:36:41.711358445 +0000 UTC m=+67.261150979" Apr 17 23:36:42.339060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2595782698.mount: Deactivated successfully. 
Apr 17 23:36:42.360528 containerd[2145]: time="2026-04-17T23:36:42.360455069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:42.363626 containerd[2145]: time="2026-04-17T23:36:42.363495293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 17 23:36:42.367120 containerd[2145]: time="2026-04-17T23:36:42.365621417Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:42.373163 containerd[2145]: time="2026-04-17T23:36:42.373108733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:42.376442 containerd[2145]: time="2026-04-17T23:36:42.376390001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.982521798s" Apr 17 23:36:42.376635 containerd[2145]: time="2026-04-17T23:36:42.376605581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 17 23:36:42.379584 containerd[2145]: time="2026-04-17T23:36:42.379536341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:36:42.384846 containerd[2145]: time="2026-04-17T23:36:42.384794789Z" level=info msg="CreateContainer within sandbox 
\"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:36:42.412561 containerd[2145]: time="2026-04-17T23:36:42.412505105Z" level=info msg="CreateContainer within sandbox \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372\"" Apr 17 23:36:42.415490 containerd[2145]: time="2026-04-17T23:36:42.413798177Z" level=info msg="StartContainer for \"daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372\"" Apr 17 23:36:42.609748 containerd[2145]: time="2026-04-17T23:36:42.609565758Z" level=info msg="StartContainer for \"daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372\" returns successfully" Apr 17 23:36:42.672175 kubelet[3715]: I0417 23:36:42.670357 3715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:36:42.672753 containerd[2145]: time="2026-04-17T23:36:42.672691398Z" level=info msg="StopContainer for \"42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509\" with timeout 30 (s)" Apr 17 23:36:42.673696 containerd[2145]: time="2026-04-17T23:36:42.673646778Z" level=info msg="StopContainer for \"daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372\" with timeout 30 (s)" Apr 17 23:36:42.674199 containerd[2145]: time="2026-04-17T23:36:42.674161470Z" level=info msg="Stop container \"42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509\" with signal terminated" Apr 17 23:36:42.677576 containerd[2145]: time="2026-04-17T23:36:42.676729974Z" level=info msg="Stop container \"daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372\" with signal terminated" Apr 17 23:36:42.712599 kubelet[3715]: I0417 23:36:42.712182 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7979cdb584-z9jkm" 
podStartSLOduration=18.924479691 podStartE2EDuration="34.712154406s" podCreationTimestamp="2026-04-17 23:36:08 +0000 UTC" firstStartedPulling="2026-04-17 23:36:26.590384126 +0000 UTC m=+52.140176396" lastFinishedPulling="2026-04-17 23:36:42.378058841 +0000 UTC m=+67.927851111" observedRunningTime="2026-04-17 23:36:42.70720167 +0000 UTC m=+68.256994048" watchObservedRunningTime="2026-04-17 23:36:42.712154406 +0000 UTC m=+68.261946700" Apr 17 23:36:42.838355 containerd[2145]: time="2026-04-17T23:36:42.838270483Z" level=info msg="shim disconnected" id=42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509 namespace=k8s.io Apr 17 23:36:42.838806 containerd[2145]: time="2026-04-17T23:36:42.838540699Z" level=warning msg="cleaning up after shim disconnected" id=42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509 namespace=k8s.io Apr 17 23:36:42.838806 containerd[2145]: time="2026-04-17T23:36:42.838571719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:42.847532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509-rootfs.mount: Deactivated successfully. Apr 17 23:36:42.894975 containerd[2145]: time="2026-04-17T23:36:42.894494059Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:36:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:36:43.329020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372-rootfs.mount: Deactivated successfully. 
Apr 17 23:36:43.522444 containerd[2145]: time="2026-04-17T23:36:43.522315546Z" level=info msg="shim disconnected" id=daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372 namespace=k8s.io Apr 17 23:36:43.522444 containerd[2145]: time="2026-04-17T23:36:43.522408762Z" level=warning msg="cleaning up after shim disconnected" id=daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372 namespace=k8s.io Apr 17 23:36:43.522444 containerd[2145]: time="2026-04-17T23:36:43.522432246Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:43.538817 containerd[2145]: time="2026-04-17T23:36:43.538170282Z" level=info msg="StopContainer for \"42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509\" returns successfully" Apr 17 23:36:43.575352 containerd[2145]: time="2026-04-17T23:36:43.575275483Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:36:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:36:43.590497 containerd[2145]: time="2026-04-17T23:36:43.590323471Z" level=info msg="StopContainer for \"daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372\" returns successfully" Apr 17 23:36:43.596567 containerd[2145]: time="2026-04-17T23:36:43.596496955Z" level=info msg="StopPodSandbox for \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\"" Apr 17 23:36:43.596744 containerd[2145]: time="2026-04-17T23:36:43.596590783Z" level=info msg="Container to stop \"42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:36:43.596744 containerd[2145]: time="2026-04-17T23:36:43.596623483Z" level=info msg="Container to stop \"daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:36:43.624455 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621-shm.mount: Deactivated successfully. Apr 17 23:36:43.748905 containerd[2145]: time="2026-04-17T23:36:43.748209151Z" level=info msg="shim disconnected" id=937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621 namespace=k8s.io Apr 17 23:36:43.748905 containerd[2145]: time="2026-04-17T23:36:43.748747543Z" level=warning msg="cleaning up after shim disconnected" id=937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621 namespace=k8s.io Apr 17 23:36:43.749828 containerd[2145]: time="2026-04-17T23:36:43.748777483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:43.754893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621-rootfs.mount: Deactivated successfully. Apr 17 23:36:43.791346 containerd[2145]: time="2026-04-17T23:36:43.791065004Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:36:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:36:43.950454 systemd-networkd[1694]: cali95998a6f9d6: Link DOWN Apr 17 23:36:43.950469 systemd-networkd[1694]: cali95998a6f9d6: Lost carrier Apr 17 23:36:43.982964 systemd[1]: Started sshd@7-172.31.31.247:22-4.175.71.9:54650.service - OpenSSH per-connection server daemon (4.175.71.9:54650). Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:43.942 [INFO][6223] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:43.945 [INFO][6223] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" iface="eth0" netns="/var/run/netns/cni-47bd2e23-a98a-8546-9d92-406ab9b32723" Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:43.945 [INFO][6223] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" iface="eth0" netns="/var/run/netns/cni-47bd2e23-a98a-8546-9d92-406ab9b32723" Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:43.979 [INFO][6223] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" after=33.911845ms iface="eth0" netns="/var/run/netns/cni-47bd2e23-a98a-8546-9d92-406ab9b32723" Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:43.979 [INFO][6223] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:43.980 [INFO][6223] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:44.253 [INFO][6235] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:44.253 [INFO][6235] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:44.253 [INFO][6235] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:44.366 [INFO][6235] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:44.366 [INFO][6235] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:44.382 [INFO][6235] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:44.428134 containerd[2145]: 2026-04-17 23:36:44.400 [INFO][6223] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:36:44.435864 containerd[2145]: time="2026-04-17T23:36:44.435780919Z" level=info msg="TearDown network for sandbox \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" successfully" Apr 17 23:36:44.435864 containerd[2145]: time="2026-04-17T23:36:44.435839635Z" level=info msg="StopPodSandbox for \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" returns successfully" Apr 17 23:36:44.458351 systemd[1]: run-netns-cni\x2d47bd2e23\x2da98a\x2d8546\x2d9d92\x2d406ab9b32723.mount: Deactivated successfully. 
Apr 17 23:36:44.604328 kubelet[3715]: I0417 23:36:44.604267 3715 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r78ld\" (UniqueName: \"kubernetes.io/projected/7f054ca4-5891-40ee-ba80-c401a6255ea7-kube-api-access-r78ld\") pod \"7f054ca4-5891-40ee-ba80-c401a6255ea7\" (UID: \"7f054ca4-5891-40ee-ba80-c401a6255ea7\") " Apr 17 23:36:44.605010 kubelet[3715]: I0417 23:36:44.604354 3715 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f054ca4-5891-40ee-ba80-c401a6255ea7-whisker-ca-bundle\") pod \"7f054ca4-5891-40ee-ba80-c401a6255ea7\" (UID: \"7f054ca4-5891-40ee-ba80-c401a6255ea7\") " Apr 17 23:36:44.605010 kubelet[3715]: I0417 23:36:44.604412 3715 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7f054ca4-5891-40ee-ba80-c401a6255ea7-nginx-config\") pod \"7f054ca4-5891-40ee-ba80-c401a6255ea7\" (UID: \"7f054ca4-5891-40ee-ba80-c401a6255ea7\") " Apr 17 23:36:44.605010 kubelet[3715]: I0417 23:36:44.604454 3715 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f054ca4-5891-40ee-ba80-c401a6255ea7-whisker-backend-key-pair\") pod \"7f054ca4-5891-40ee-ba80-c401a6255ea7\" (UID: \"7f054ca4-5891-40ee-ba80-c401a6255ea7\") " Apr 17 23:36:44.609816 kubelet[3715]: I0417 23:36:44.609402 3715 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f054ca4-5891-40ee-ba80-c401a6255ea7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7f054ca4-5891-40ee-ba80-c401a6255ea7" (UID: "7f054ca4-5891-40ee-ba80-c401a6255ea7"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:36:44.613729 kubelet[3715]: I0417 23:36:44.613674 3715 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f054ca4-5891-40ee-ba80-c401a6255ea7-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "7f054ca4-5891-40ee-ba80-c401a6255ea7" (UID: "7f054ca4-5891-40ee-ba80-c401a6255ea7"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:36:44.639282 kubelet[3715]: I0417 23:36:44.639165 3715 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f054ca4-5891-40ee-ba80-c401a6255ea7-kube-api-access-r78ld" (OuterVolumeSpecName: "kube-api-access-r78ld") pod "7f054ca4-5891-40ee-ba80-c401a6255ea7" (UID: "7f054ca4-5891-40ee-ba80-c401a6255ea7"). InnerVolumeSpecName "kube-api-access-r78ld". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:36:44.643328 kubelet[3715]: I0417 23:36:44.643263 3715 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f054ca4-5891-40ee-ba80-c401a6255ea7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7f054ca4-5891-40ee-ba80-c401a6255ea7" (UID: "7f054ca4-5891-40ee-ba80-c401a6255ea7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:36:44.647740 systemd[1]: var-lib-kubelet-pods-7f054ca4\x2d5891\x2d40ee\x2dba80\x2dc401a6255ea7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr78ld.mount: Deactivated successfully. Apr 17 23:36:44.666001 systemd[1]: var-lib-kubelet-pods-7f054ca4\x2d5891\x2d40ee\x2dba80\x2dc401a6255ea7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 17 23:36:44.707722 kubelet[3715]: I0417 23:36:44.705277 3715 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r78ld\" (UniqueName: \"kubernetes.io/projected/7f054ca4-5891-40ee-ba80-c401a6255ea7-kube-api-access-r78ld\") on node \"ip-172-31-31-247\" DevicePath \"\"" Apr 17 23:36:44.707722 kubelet[3715]: I0417 23:36:44.705323 3715 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f054ca4-5891-40ee-ba80-c401a6255ea7-whisker-ca-bundle\") on node \"ip-172-31-31-247\" DevicePath \"\"" Apr 17 23:36:44.707722 kubelet[3715]: I0417 23:36:44.705350 3715 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7f054ca4-5891-40ee-ba80-c401a6255ea7-nginx-config\") on node \"ip-172-31-31-247\" DevicePath \"\"" Apr 17 23:36:44.707722 kubelet[3715]: I0417 23:36:44.705376 3715 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f054ca4-5891-40ee-ba80-c401a6255ea7-whisker-backend-key-pair\") on node \"ip-172-31-31-247\" DevicePath \"\"" Apr 17 23:36:44.724140 kubelet[3715]: I0417 23:36:44.723565 3715 scope.go:117] "RemoveContainer" containerID="daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372" Apr 17 23:36:44.748400 containerd[2145]: time="2026-04-17T23:36:44.747871316Z" level=info msg="RemoveContainer for \"daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372\"" Apr 17 23:36:44.773936 containerd[2145]: time="2026-04-17T23:36:44.773727789Z" level=info msg="RemoveContainer for \"daf1ada7aa79185fcdb1f5aa952110e75c1d21b4922234b3b5a2056aebe75372\" returns successfully" Apr 17 23:36:44.774932 kubelet[3715]: I0417 23:36:44.774341 3715 scope.go:117] "RemoveContainer" containerID="42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509" Apr 17 23:36:44.783884 containerd[2145]: time="2026-04-17T23:36:44.783277041Z" level=info msg="RemoveContainer 
for \"42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509\"" Apr 17 23:36:44.800057 containerd[2145]: time="2026-04-17T23:36:44.799845645Z" level=info msg="RemoveContainer for \"42c961f494e6b64ce6f539ae0d4716310fc69d722e415e0aa8d5d9dfcb643509\" returns successfully" Apr 17 23:36:44.911281 kubelet[3715]: I0417 23:36:44.910466 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1-whisker-ca-bundle\") pod \"whisker-7667b9fb9c-gcgmc\" (UID: \"56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1\") " pod="calico-system/whisker-7667b9fb9c-gcgmc" Apr 17 23:36:44.912202 kubelet[3715]: I0417 23:36:44.912064 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1-nginx-config\") pod \"whisker-7667b9fb9c-gcgmc\" (UID: \"56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1\") " pod="calico-system/whisker-7667b9fb9c-gcgmc" Apr 17 23:36:44.913398 kubelet[3715]: I0417 23:36:44.912640 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1-whisker-backend-key-pair\") pod \"whisker-7667b9fb9c-gcgmc\" (UID: \"56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1\") " pod="calico-system/whisker-7667b9fb9c-gcgmc" Apr 17 23:36:44.914278 kubelet[3715]: I0417 23:36:44.914180 3715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzf6\" (UniqueName: \"kubernetes.io/projected/56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1-kube-api-access-7kzf6\") pod \"whisker-7667b9fb9c-gcgmc\" (UID: \"56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1\") " pod="calico-system/whisker-7667b9fb9c-gcgmc" Apr 17 23:36:45.102754 sshd[6231]: Accepted publickey for core from 4.175.71.9 
port 54650 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:45.126030 containerd[2145]: time="2026-04-17T23:36:45.125477574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:45.133119 containerd[2145]: time="2026-04-17T23:36:45.132000930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 17 23:36:45.135390 containerd[2145]: time="2026-04-17T23:36:45.135330342Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:45.135853 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:45.151209 containerd[2145]: time="2026-04-17T23:36:45.150519690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:45.156364 containerd[2145]: time="2026-04-17T23:36:45.156225846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.776411789s" Apr 17 23:36:45.156364 containerd[2145]: time="2026-04-17T23:36:45.156360954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 17 23:36:45.160492 systemd-logind[2124]: New session 8 of 
user core. Apr 17 23:36:45.170694 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:36:45.178144 containerd[2145]: time="2026-04-17T23:36:45.178037311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7667b9fb9c-gcgmc,Uid:56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:45.189880 containerd[2145]: time="2026-04-17T23:36:45.189188047Z" level=info msg="CreateContainer within sandbox \"7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:36:45.267044 containerd[2145]: time="2026-04-17T23:36:45.266934787Z" level=info msg="CreateContainer within sandbox \"7a0d6bf7e4f9356a7718c21a25c720b3b6921c9350e5ce5418b854a7dd81f79f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"945f2cfed282984534ce2c86ef42ea03bbf00e773ade13b7ba6c2dd2bff6d607\"" Apr 17 23:36:45.273377 containerd[2145]: time="2026-04-17T23:36:45.270833551Z" level=info msg="StartContainer for \"945f2cfed282984534ce2c86ef42ea03bbf00e773ade13b7ba6c2dd2bff6d607\"" Apr 17 23:36:45.466468 containerd[2145]: time="2026-04-17T23:36:45.465358160Z" level=info msg="StartContainer for \"945f2cfed282984534ce2c86ef42ea03bbf00e773ade13b7ba6c2dd2bff6d607\" returns successfully" Apr 17 23:36:45.573523 systemd-networkd[1694]: cali2d729f2b44e: Link UP Apr 17 23:36:45.575929 (udev-worker)[6237]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:36:45.576055 systemd-networkd[1694]: cali2d729f2b44e: Gained carrier Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.372 [INFO][6274] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0 whisker-7667b9fb9c- calico-system 56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1 1133 0 2026-04-17 23:36:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7667b9fb9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-31-247 whisker-7667b9fb9c-gcgmc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2d729f2b44e [] [] }} ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Namespace="calico-system" Pod="whisker-7667b9fb9c-gcgmc" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.372 [INFO][6274] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Namespace="calico-system" Pod="whisker-7667b9fb9c-gcgmc" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.476 [INFO][6310] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" HandleID="k8s-pod-network.83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Workload="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.499 [INFO][6310] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" 
HandleID="k8s-pod-network.83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Workload="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000357b80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-247", "pod":"whisker-7667b9fb9c-gcgmc", "timestamp":"2026-04-17 23:36:45.476977568 +0000 UTC"}, Hostname:"ip-172-31-31-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000482f20)} Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.500 [INFO][6310] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.501 [INFO][6310] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.502 [INFO][6310] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-247' Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.506 [INFO][6310] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" host="ip-172-31-31-247" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.516 [INFO][6310] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-247" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.524 [INFO][6310] ipam/ipam.go 526: Trying affinity for 192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.528 [INFO][6310] ipam/ipam.go 160: Attempting to load block cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.538 [INFO][6310] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.115.192/26 host="ip-172-31-31-247" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.538 [INFO][6310] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" host="ip-172-31-31-247" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.541 [INFO][6310] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05 Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.548 [INFO][6310] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" host="ip-172-31-31-247" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.563 [INFO][6310] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.115.201/26] block=192.168.115.192/26 handle="k8s-pod-network.83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" host="ip-172-31-31-247" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.563 [INFO][6310] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.115.201/26] handle="k8s-pod-network.83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" host="ip-172-31-31-247" Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.563 [INFO][6310] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:36:45.609293 containerd[2145]: 2026-04-17 23:36:45.563 [INFO][6310] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.115.201/26] IPv6=[] ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" HandleID="k8s-pod-network.83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Workload="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" Apr 17 23:36:45.612191 containerd[2145]: 2026-04-17 23:36:45.569 [INFO][6274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Namespace="calico-system" Pod="whisker-7667b9fb9c-gcgmc" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0", GenerateName:"whisker-7667b9fb9c-", Namespace:"calico-system", SelfLink:"", UID:"56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7667b9fb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"", Pod:"whisker-7667b9fb9c-gcgmc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali2d729f2b44e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:45.612191 containerd[2145]: 2026-04-17 23:36:45.569 [INFO][6274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.201/32] ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Namespace="calico-system" Pod="whisker-7667b9fb9c-gcgmc" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" Apr 17 23:36:45.612191 containerd[2145]: 2026-04-17 23:36:45.569 [INFO][6274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d729f2b44e ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Namespace="calico-system" Pod="whisker-7667b9fb9c-gcgmc" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" Apr 17 23:36:45.612191 containerd[2145]: 2026-04-17 23:36:45.573 [INFO][6274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Namespace="calico-system" Pod="whisker-7667b9fb9c-gcgmc" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" Apr 17 23:36:45.612191 containerd[2145]: 2026-04-17 23:36:45.574 [INFO][6274] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Namespace="calico-system" Pod="whisker-7667b9fb9c-gcgmc" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0", GenerateName:"whisker-7667b9fb9c-", Namespace:"calico-system", SelfLink:"", UID:"56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1", ResourceVersion:"1133", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7667b9fb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-247", ContainerID:"83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05", Pod:"whisker-7667b9fb9c-gcgmc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d729f2b44e", MAC:"5e:a7:1c:46:dd:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:45.612191 containerd[2145]: 2026-04-17 23:36:45.592 [INFO][6274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05" Namespace="calico-system" Pod="whisker-7667b9fb9c-gcgmc" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7667b9fb9c--gcgmc-eth0" Apr 17 23:36:45.701822 containerd[2145]: time="2026-04-17T23:36:45.701622945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:45.701822 containerd[2145]: time="2026-04-17T23:36:45.701735277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:45.702184 containerd[2145]: time="2026-04-17T23:36:45.701814129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:45.704689 containerd[2145]: time="2026-04-17T23:36:45.704214777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:45.800902 kubelet[3715]: I0417 23:36:45.800591 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m7ffs" podStartSLOduration=22.850668772 podStartE2EDuration="40.80056117s" podCreationTimestamp="2026-04-17 23:36:05 +0000 UTC" firstStartedPulling="2026-04-17 23:36:27.216803665 +0000 UTC m=+52.766595935" lastFinishedPulling="2026-04-17 23:36:45.166696063 +0000 UTC m=+70.716488333" observedRunningTime="2026-04-17 23:36:45.78650677 +0000 UTC m=+71.336299100" watchObservedRunningTime="2026-04-17 23:36:45.80056117 +0000 UTC m=+71.350353464" Apr 17 23:36:45.970987 containerd[2145]: time="2026-04-17T23:36:45.970834535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7667b9fb9c-gcgmc,Uid:56b9f0d3-1d3a-4ca1-b67c-2f806afbf7e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05\"" Apr 17 23:36:45.988568 containerd[2145]: time="2026-04-17T23:36:45.986863391Z" level=info msg="CreateContainer within sandbox \"83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:36:46.009546 containerd[2145]: time="2026-04-17T23:36:46.009368443Z" level=info msg="CreateContainer within sandbox \"83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"66dd62e71a5d08f52ffab5f3dd7f6e35b64f406887d8d671d0450934f6386be1\"" 
Apr 17 23:36:46.010465 containerd[2145]: time="2026-04-17T23:36:46.010408255Z" level=info msg="StartContainer for \"66dd62e71a5d08f52ffab5f3dd7f6e35b64f406887d8d671d0450934f6386be1\"" Apr 17 23:36:46.058804 kubelet[3715]: I0417 23:36:46.058649 3715 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:36:46.058804 kubelet[3715]: I0417 23:36:46.058735 3715 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:36:46.153618 sshd[6231]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:46.168769 systemd[1]: sshd@7-172.31.31.247:22-4.175.71.9:54650.service: Deactivated successfully. Apr 17 23:36:46.193557 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:36:46.194413 systemd-logind[2124]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:36:46.207573 systemd-logind[2124]: Removed session 8. 
Apr 17 23:36:46.344391 containerd[2145]: time="2026-04-17T23:36:46.343780292Z" level=info msg="StartContainer for \"66dd62e71a5d08f52ffab5f3dd7f6e35b64f406887d8d671d0450934f6386be1\" returns successfully" Apr 17 23:36:46.357443 containerd[2145]: time="2026-04-17T23:36:46.357169532Z" level=info msg="CreateContainer within sandbox \"83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:36:46.379872 containerd[2145]: time="2026-04-17T23:36:46.379773681Z" level=info msg="CreateContainer within sandbox \"83bc3d7471e59fe4b766516041d486bdf793328c6e7899e18e2588acb01b8c05\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5ea9c5afa3c834ba91c4235c868a512a1d99a06ce106cb739a0584ea62975a47\"" Apr 17 23:36:46.382138 containerd[2145]: time="2026-04-17T23:36:46.381762909Z" level=info msg="StartContainer for \"5ea9c5afa3c834ba91c4235c868a512a1d99a06ce106cb739a0584ea62975a47\"" Apr 17 23:36:46.528536 containerd[2145]: time="2026-04-17T23:36:46.528484221Z" level=info msg="StartContainer for \"5ea9c5afa3c834ba91c4235c868a512a1d99a06ce106cb739a0584ea62975a47\" returns successfully" Apr 17 23:36:46.704287 kubelet[3715]: I0417 23:36:46.704131 3715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f054ca4-5891-40ee-ba80-c401a6255ea7" path="/var/lib/kubelet/pods/7f054ca4-5891-40ee-ba80-c401a6255ea7/volumes" Apr 17 23:36:47.366329 systemd-networkd[1694]: cali2d729f2b44e: Gained IPv6LL Apr 17 23:36:49.817561 ntpd[2102]: Listen normally on 16 cali2d729f2b44e [fe80::ecee:eeff:feee:eeee%15]:123 Apr 17 23:36:49.818277 ntpd[2102]: 17 Apr 23:36:49 ntpd[2102]: Listen normally on 16 cali2d729f2b44e [fe80::ecee:eeff:feee:eeee%15]:123 Apr 17 23:36:49.818277 ntpd[2102]: 17 Apr 23:36:49 ntpd[2102]: Deleting interface #9 cali95998a6f9d6, fe80::ecee:eeff:feee:eeee%6#123, interface stats: received=0, sent=0, dropped=0, active_time=16 secs Apr 17 23:36:49.817631 
ntpd[2102]: Deleting interface #9 cali95998a6f9d6, fe80::ecee:eeff:feee:eeee%6#123, interface stats: received=0, sent=0, dropped=0, active_time=16 secs Apr 17 23:36:51.327372 systemd[1]: Started sshd@8-172.31.31.247:22-4.175.71.9:40160.service - OpenSSH per-connection server daemon (4.175.71.9:40160). Apr 17 23:36:52.321002 sshd[6497]: Accepted publickey for core from 4.175.71.9 port 40160 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:52.324078 sshd[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:52.331971 systemd-logind[2124]: New session 9 of user core. Apr 17 23:36:52.338913 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:36:53.134177 sshd[6497]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:53.144848 systemd[1]: sshd@8-172.31.31.247:22-4.175.71.9:40160.service: Deactivated successfully. Apr 17 23:36:53.146299 systemd-logind[2124]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:36:53.157887 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:36:53.161818 systemd-logind[2124]: Removed session 9. Apr 17 23:36:56.510247 kubelet[3715]: I0417 23:36:56.510149 3715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7667b9fb9c-gcgmc" podStartSLOduration=12.510122539 podStartE2EDuration="12.510122539s" podCreationTimestamp="2026-04-17 23:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:46.796030859 +0000 UTC m=+72.345823237" watchObservedRunningTime="2026-04-17 23:36:56.510122539 +0000 UTC m=+82.059914857" Apr 17 23:36:58.311202 systemd[1]: Started sshd@9-172.31.31.247:22-4.175.71.9:50270.service - OpenSSH per-connection server daemon (4.175.71.9:50270). 
Apr 17 23:36:59.352312 sshd[6561]: Accepted publickey for core from 4.175.71.9 port 50270 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:59.355608 sshd[6561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:59.363904 systemd-logind[2124]: New session 10 of user core. Apr 17 23:36:59.369733 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:37:00.189439 sshd[6561]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:00.195483 systemd[1]: sshd@9-172.31.31.247:22-4.175.71.9:50270.service: Deactivated successfully. Apr 17 23:37:00.196346 systemd-logind[2124]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:37:00.205907 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:37:00.209430 systemd-logind[2124]: Removed session 10. Apr 17 23:37:05.364578 systemd[1]: Started sshd@10-172.31.31.247:22-4.175.71.9:50286.service - OpenSSH per-connection server daemon (4.175.71.9:50286). Apr 17 23:37:06.418379 sshd[6614]: Accepted publickey for core from 4.175.71.9 port 50286 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:06.422387 sshd[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:06.432359 systemd-logind[2124]: New session 11 of user core. Apr 17 23:37:06.438060 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:37:06.966873 kubelet[3715]: I0417 23:37:06.966430 3715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:37:07.373450 sshd[6614]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:07.389874 systemd[1]: sshd@10-172.31.31.247:22-4.175.71.9:50286.service: Deactivated successfully. Apr 17 23:37:07.391756 systemd-logind[2124]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:37:07.404388 systemd[1]: session-11.scope: Deactivated successfully. 
Apr 17 23:37:07.407269 systemd-logind[2124]: Removed session 11. Apr 17 23:37:07.557620 systemd[1]: Started sshd@11-172.31.31.247:22-4.175.71.9:41546.service - OpenSSH per-connection server daemon (4.175.71.9:41546). Apr 17 23:37:08.597146 sshd[6641]: Accepted publickey for core from 4.175.71.9 port 41546 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:08.599522 sshd[6641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:08.608428 systemd-logind[2124]: New session 12 of user core. Apr 17 23:37:08.615613 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:37:09.525908 sshd[6641]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:09.534463 systemd-logind[2124]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:37:09.535654 systemd[1]: sshd@11-172.31.31.247:22-4.175.71.9:41546.service: Deactivated successfully. Apr 17 23:37:09.544649 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:37:09.549022 systemd-logind[2124]: Removed session 12. Apr 17 23:37:09.689708 systemd[1]: Started sshd@12-172.31.31.247:22-4.175.71.9:41554.service - OpenSSH per-connection server daemon (4.175.71.9:41554). Apr 17 23:37:10.698317 sshd[6652]: Accepted publickey for core from 4.175.71.9 port 41554 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:10.724656 sshd[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:10.778991 systemd-logind[2124]: New session 13 of user core. Apr 17 23:37:10.788442 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:37:11.526853 sshd[6652]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:11.534804 systemd[1]: sshd@12-172.31.31.247:22-4.175.71.9:41554.service: Deactivated successfully. Apr 17 23:37:11.543468 systemd[1]: session-13.scope: Deactivated successfully. 
Apr 17 23:37:11.545139 systemd-logind[2124]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:37:11.547857 systemd-logind[2124]: Removed session 13. Apr 17 23:37:16.715758 systemd[1]: Started sshd@13-172.31.31.247:22-4.175.71.9:54612.service - OpenSSH per-connection server daemon (4.175.71.9:54612). Apr 17 23:37:17.811266 sshd[6698]: Accepted publickey for core from 4.175.71.9 port 54612 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:17.818939 sshd[6698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:17.837237 systemd-logind[2124]: New session 14 of user core. Apr 17 23:37:17.845722 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:37:18.814837 sshd[6698]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:18.827719 systemd[1]: sshd@13-172.31.31.247:22-4.175.71.9:54612.service: Deactivated successfully. Apr 17 23:37:18.842042 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:37:18.844243 systemd-logind[2124]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:37:18.847486 systemd-logind[2124]: Removed session 14. Apr 17 23:37:18.961346 kubelet[3715]: I0417 23:37:18.961282 3715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:37:18.982760 systemd[1]: Started sshd@14-172.31.31.247:22-4.175.71.9:54626.service - OpenSSH per-connection server daemon (4.175.71.9:54626). Apr 17 23:37:20.025426 sshd[6712]: Accepted publickey for core from 4.175.71.9 port 54626 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:20.030532 sshd[6712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:20.043285 systemd-logind[2124]: New session 15 of user core. Apr 17 23:37:20.050193 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 17 23:37:21.273862 sshd[6712]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:21.280312 systemd-logind[2124]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:37:21.281863 systemd[1]: sshd@14-172.31.31.247:22-4.175.71.9:54626.service: Deactivated successfully. Apr 17 23:37:21.295922 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:37:21.297283 systemd-logind[2124]: Removed session 15. Apr 17 23:37:21.441616 systemd[1]: Started sshd@15-172.31.31.247:22-4.175.71.9:54632.service - OpenSSH per-connection server daemon (4.175.71.9:54632). Apr 17 23:37:22.449433 sshd[6727]: Accepted publickey for core from 4.175.71.9 port 54632 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:22.452188 sshd[6727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:22.462248 systemd-logind[2124]: New session 16 of user core. Apr 17 23:37:22.469821 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:37:24.200710 sshd[6727]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:24.207923 systemd-logind[2124]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:37:24.208489 systemd[1]: sshd@15-172.31.31.247:22-4.175.71.9:54632.service: Deactivated successfully. Apr 17 23:37:24.214991 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:37:24.221384 systemd-logind[2124]: Removed session 16. Apr 17 23:37:24.370710 systemd[1]: Started sshd@16-172.31.31.247:22-4.175.71.9:54642.service - OpenSSH per-connection server daemon (4.175.71.9:54642). Apr 17 23:37:25.365994 sshd[6754]: Accepted publickey for core from 4.175.71.9 port 54642 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:25.369858 sshd[6754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:25.379364 systemd-logind[2124]: New session 17 of user core. 
Apr 17 23:37:25.389769 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:37:26.568401 sshd[6754]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:26.576609 systemd[1]: sshd@16-172.31.31.247:22-4.175.71.9:54642.service: Deactivated successfully. Apr 17 23:37:26.584760 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:37:26.587401 systemd-logind[2124]: Session 17 logged out. Waiting for processes to exit. Apr 17 23:37:26.590288 systemd-logind[2124]: Removed session 17. Apr 17 23:37:26.735647 systemd[1]: Started sshd@17-172.31.31.247:22-4.175.71.9:35382.service - OpenSSH per-connection server daemon (4.175.71.9:35382). Apr 17 23:37:27.735925 sshd[6793]: Accepted publickey for core from 4.175.71.9 port 35382 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:27.738631 sshd[6793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:27.746678 systemd-logind[2124]: New session 18 of user core. Apr 17 23:37:27.759737 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:37:28.546428 sshd[6793]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:28.553025 systemd[1]: sshd@17-172.31.31.247:22-4.175.71.9:35382.service: Deactivated successfully. Apr 17 23:37:28.562644 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:37:28.564936 systemd-logind[2124]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:37:28.568469 systemd-logind[2124]: Removed session 18. Apr 17 23:37:32.362070 systemd[1]: run-containerd-runc-k8s.io-bf48250da9b899562ec725f2893e4082c4a67e327f3cfc7cedcbbfc37e93fd3e-runc.XqClNl.mount: Deactivated successfully. Apr 17 23:37:33.725531 systemd[1]: Started sshd@18-172.31.31.247:22-4.175.71.9:35392.service - OpenSSH per-connection server daemon (4.175.71.9:35392). 
Apr 17 23:37:34.758113 containerd[2145]: time="2026-04-17T23:37:34.758035317Z" level=info msg="StopPodSandbox for \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\"" Apr 17 23:37:34.777901 sshd[6828]: Accepted publickey for core from 4.175.71.9 port 35392 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:34.780483 sshd[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:34.795041 systemd-logind[2124]: New session 19 of user core. Apr 17 23:37:34.798863 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.841 [WARNING][6861] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.842 [INFO][6861] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.842 [INFO][6861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" iface="eth0" netns="" Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.842 [INFO][6861] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.842 [INFO][6861] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.890 [INFO][6870] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.891 [INFO][6870] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.891 [INFO][6870] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.906 [WARNING][6870] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.906 [INFO][6870] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.908 [INFO][6870] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:34.916236 containerd[2145]: 2026-04-17 23:37:34.912 [INFO][6861] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:37:34.916236 containerd[2145]: time="2026-04-17T23:37:34.916018906Z" level=info msg="TearDown network for sandbox \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" successfully" Apr 17 23:37:34.916236 containerd[2145]: time="2026-04-17T23:37:34.916058962Z" level=info msg="StopPodSandbox for \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" returns successfully" Apr 17 23:37:34.918416 containerd[2145]: time="2026-04-17T23:37:34.918149830Z" level=info msg="RemovePodSandbox for \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\"" Apr 17 23:37:34.918416 containerd[2145]: time="2026-04-17T23:37:34.918216958Z" level=info msg="Forcibly stopping sandbox \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\"" Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:34.992 [WARNING][6885] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" WorkloadEndpoint="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:34.992 [INFO][6885] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:34.992 [INFO][6885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" iface="eth0" netns="" Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:34.992 [INFO][6885] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:34.992 [INFO][6885] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:35.038 [INFO][6892] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:35.038 [INFO][6892] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:35.038 [INFO][6892] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:35.056 [WARNING][6892] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:35.056 [INFO][6892] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" HandleID="k8s-pod-network.937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Workload="ip--172--31--31--247-k8s-whisker--7979cdb584--z9jkm-eth0" Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:35.061 [INFO][6892] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:35.074412 containerd[2145]: 2026-04-17 23:37:35.070 [INFO][6885] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621" Apr 17 23:37:35.076145 containerd[2145]: time="2026-04-17T23:37:35.075395370Z" level=info msg="TearDown network for sandbox \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" successfully" Apr 17 23:37:35.084699 containerd[2145]: time="2026-04-17T23:37:35.084643410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:37:35.085017 containerd[2145]: time="2026-04-17T23:37:35.084985842Z" level=info msg="RemovePodSandbox \"937f6c9f91371a995defeeaab9eaf2ffabf5dfaf295eb595c0e1b8a0ab440621\" returns successfully" Apr 17 23:37:35.620838 sshd[6828]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:35.632156 systemd[1]: sshd@18-172.31.31.247:22-4.175.71.9:35392.service: Deactivated successfully. Apr 17 23:37:35.633152 systemd-logind[2124]: Session 19 logged out. 
Waiting for processes to exit. Apr 17 23:37:35.640635 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 23:37:35.644186 systemd-logind[2124]: Removed session 19. Apr 17 23:37:40.781615 systemd[1]: Started sshd@19-172.31.31.247:22-4.175.71.9:49852.service - OpenSSH per-connection server daemon (4.175.71.9:49852). Apr 17 23:37:41.797717 sshd[6926]: Accepted publickey for core from 4.175.71.9 port 49852 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:41.800397 sshd[6926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:41.810580 systemd-logind[2124]: New session 20 of user core. Apr 17 23:37:41.815774 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:37:42.622452 sshd[6926]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:42.629956 systemd[1]: sshd@19-172.31.31.247:22-4.175.71.9:49852.service: Deactivated successfully. Apr 17 23:37:42.631771 systemd-logind[2124]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:37:42.639586 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:37:42.642298 systemd-logind[2124]: Removed session 20. Apr 17 23:37:47.795029 systemd[1]: Started sshd@20-172.31.31.247:22-4.175.71.9:50560.service - OpenSSH per-connection server daemon (4.175.71.9:50560). Apr 17 23:37:48.823991 sshd[6943]: Accepted publickey for core from 4.175.71.9 port 50560 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:48.836778 sshd[6943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:48.854072 systemd-logind[2124]: New session 21 of user core. Apr 17 23:37:48.860662 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 23:37:49.663406 sshd[6943]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:49.672691 systemd-logind[2124]: Session 21 logged out. Waiting for processes to exit. 
Apr 17 23:37:49.673680 systemd[1]: sshd@20-172.31.31.247:22-4.175.71.9:50560.service: Deactivated successfully.
Apr 17 23:37:49.686162 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:37:49.688764 systemd-logind[2124]: Removed session 21.
Apr 17 23:37:54.840204 systemd[1]: Started sshd@21-172.31.31.247:22-4.175.71.9:50574.service - OpenSSH per-connection server daemon (4.175.71.9:50574).
Apr 17 23:37:55.899521 sshd[6967]: Accepted publickey for core from 4.175.71.9 port 50574 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws
Apr 17 23:37:55.902148 sshd[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:55.910656 systemd-logind[2124]: New session 22 of user core.
Apr 17 23:37:55.918481 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 23:37:56.750444 sshd[6967]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:56.755641 systemd[1]: sshd@21-172.31.31.247:22-4.175.71.9:50574.service: Deactivated successfully.
Apr 17 23:37:56.765459 systemd-logind[2124]: Session 22 logged out. Waiting for processes to exit.
Apr 17 23:37:56.766608 systemd[1]: session-22.scope: Deactivated successfully.
Apr 17 23:37:56.770178 systemd-logind[2124]: Removed session 22.
Apr 17 23:38:11.473901 containerd[2145]: time="2026-04-17T23:38:11.473683903Z" level=info msg="shim disconnected" id=c40bc6a700a3c184516cdf70b5e81008ae423025705dca1e374a3e262367eca7 namespace=k8s.io
Apr 17 23:38:11.473901 containerd[2145]: time="2026-04-17T23:38:11.473885719Z" level=warning msg="cleaning up after shim disconnected" id=c40bc6a700a3c184516cdf70b5e81008ae423025705dca1e374a3e262367eca7 namespace=k8s.io
Apr 17 23:38:11.474702 containerd[2145]: time="2026-04-17T23:38:11.473975719Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:11.478229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c40bc6a700a3c184516cdf70b5e81008ae423025705dca1e374a3e262367eca7-rootfs.mount: Deactivated successfully.
Apr 17 23:38:11.881522 containerd[2145]: time="2026-04-17T23:38:11.881361705Z" level=info msg="shim disconnected" id=d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1 namespace=k8s.io
Apr 17 23:38:11.881522 containerd[2145]: time="2026-04-17T23:38:11.881443713Z" level=warning msg="cleaning up after shim disconnected" id=d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1 namespace=k8s.io
Apr 17 23:38:11.881522 containerd[2145]: time="2026-04-17T23:38:11.881464581Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:11.886774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1-rootfs.mount: Deactivated successfully.
Apr 17 23:38:12.061866 kubelet[3715]: I0417 23:38:12.061799 3715 scope.go:117] "RemoveContainer" containerID="c40bc6a700a3c184516cdf70b5e81008ae423025705dca1e374a3e262367eca7"
Apr 17 23:38:12.065828 kubelet[3715]: I0417 23:38:12.065578 3715 scope.go:117] "RemoveContainer" containerID="d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1"
Apr 17 23:38:12.066174 containerd[2145]: time="2026-04-17T23:38:12.066082398Z" level=info msg="CreateContainer within sandbox \"23a4fb56bbfe11ad7ea282986febff40f1d88be2c31df8a445a44aa553578837\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 17 23:38:12.071148 containerd[2145]: time="2026-04-17T23:38:12.070524186Z" level=info msg="CreateContainer within sandbox \"a7959f8454fff2e88890f014648d4387a6d48b9b0176eef86518d9a05198bce8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 17 23:38:12.103191 containerd[2145]: time="2026-04-17T23:38:12.103111542Z" level=info msg="CreateContainer within sandbox \"23a4fb56bbfe11ad7ea282986febff40f1d88be2c31df8a445a44aa553578837\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"297d61437e3c9bd77eb499f3a5dd10ee633d25a81fba853bd44192d8117ec269\""
Apr 17 23:38:12.106311 containerd[2145]: time="2026-04-17T23:38:12.104230122Z" level=info msg="StartContainer for \"297d61437e3c9bd77eb499f3a5dd10ee633d25a81fba853bd44192d8117ec269\""
Apr 17 23:38:12.108841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206385993.mount: Deactivated successfully.
Apr 17 23:38:12.114933 containerd[2145]: time="2026-04-17T23:38:12.114550170Z" level=info msg="CreateContainer within sandbox \"a7959f8454fff2e88890f014648d4387a6d48b9b0176eef86518d9a05198bce8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7f870ad65d0d28fd174ec5cbf6112723d54df3969dde330f6c47a290ba1b3346\""
Apr 17 23:38:12.116191 containerd[2145]: time="2026-04-17T23:38:12.115892622Z" level=info msg="StartContainer for \"7f870ad65d0d28fd174ec5cbf6112723d54df3969dde330f6c47a290ba1b3346\""
Apr 17 23:38:12.283713 containerd[2145]: time="2026-04-17T23:38:12.283597783Z" level=info msg="StartContainer for \"7f870ad65d0d28fd174ec5cbf6112723d54df3969dde330f6c47a290ba1b3346\" returns successfully"
Apr 17 23:38:12.286678 containerd[2145]: time="2026-04-17T23:38:12.283967287Z" level=info msg="StartContainer for \"297d61437e3c9bd77eb499f3a5dd10ee633d25a81fba853bd44192d8117ec269\" returns successfully"
Apr 17 23:38:16.655852 containerd[2145]: time="2026-04-17T23:38:16.655544833Z" level=info msg="shim disconnected" id=6f477d52e87bb8761609bf71216f2313bcb3986c5b57b18ad7de9108042dbcbd namespace=k8s.io
Apr 17 23:38:16.656442 containerd[2145]: time="2026-04-17T23:38:16.656180953Z" level=warning msg="cleaning up after shim disconnected" id=6f477d52e87bb8761609bf71216f2313bcb3986c5b57b18ad7de9108042dbcbd namespace=k8s.io
Apr 17 23:38:16.656442 containerd[2145]: time="2026-04-17T23:38:16.656245765Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:16.662871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f477d52e87bb8761609bf71216f2313bcb3986c5b57b18ad7de9108042dbcbd-rootfs.mount: Deactivated successfully.
Apr 17 23:38:17.090044 kubelet[3715]: I0417 23:38:17.089839 3715 scope.go:117] "RemoveContainer" containerID="6f477d52e87bb8761609bf71216f2313bcb3986c5b57b18ad7de9108042dbcbd"
Apr 17 23:38:17.102729 containerd[2145]: time="2026-04-17T23:38:17.102128291Z" level=info msg="CreateContainer within sandbox \"05809f66e18eee642700d83226592e563032b5d0e92bec312bda6fa9549e9d3b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 17 23:38:17.135962 containerd[2145]: time="2026-04-17T23:38:17.135908519Z" level=info msg="CreateContainer within sandbox \"05809f66e18eee642700d83226592e563032b5d0e92bec312bda6fa9549e9d3b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"67ab26dcbf0fe75f7a7b7df99a1b45f19b587695139f7161a868012eec9b7041\""
Apr 17 23:38:17.137671 containerd[2145]: time="2026-04-17T23:38:17.137569991Z" level=info msg="StartContainer for \"67ab26dcbf0fe75f7a7b7df99a1b45f19b587695139f7161a868012eec9b7041\""
Apr 17 23:38:17.263361 containerd[2145]: time="2026-04-17T23:38:17.263291520Z" level=info msg="StartContainer for \"67ab26dcbf0fe75f7a7b7df99a1b45f19b587695139f7161a868012eec9b7041\" returns successfully"
Apr 17 23:38:17.557114 kubelet[3715]: E0417 23:38:17.556167 3715 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-247?timeout=10s\": context deadline exceeded"
Apr 17 23:38:23.846032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f870ad65d0d28fd174ec5cbf6112723d54df3969dde330f6c47a290ba1b3346-rootfs.mount: Deactivated successfully.
Apr 17 23:38:23.856181 containerd[2145]: time="2026-04-17T23:38:23.855795849Z" level=info msg="shim disconnected" id=7f870ad65d0d28fd174ec5cbf6112723d54df3969dde330f6c47a290ba1b3346 namespace=k8s.io
Apr 17 23:38:23.856181 containerd[2145]: time="2026-04-17T23:38:23.855878697Z" level=warning msg="cleaning up after shim disconnected" id=7f870ad65d0d28fd174ec5cbf6112723d54df3969dde330f6c47a290ba1b3346 namespace=k8s.io
Apr 17 23:38:23.856181 containerd[2145]: time="2026-04-17T23:38:23.855899517Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:24.117385 kubelet[3715]: I0417 23:38:24.117252 3715 scope.go:117] "RemoveContainer" containerID="d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1"
Apr 17 23:38:24.118714 kubelet[3715]: I0417 23:38:24.117954 3715 scope.go:117] "RemoveContainer" containerID="7f870ad65d0d28fd174ec5cbf6112723d54df3969dde330f6c47a290ba1b3346"
Apr 17 23:38:24.118714 kubelet[3715]: E0417 23:38:24.118229 3715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-lml4w_tigera-operator(f538a66a-6882-41ac-ad4e-df10301d05bf)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-lml4w" podUID="f538a66a-6882-41ac-ad4e-df10301d05bf"
Apr 17 23:38:24.121197 containerd[2145]: time="2026-04-17T23:38:24.120703242Z" level=info msg="RemoveContainer for \"d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1\""
Apr 17 23:38:24.128429 containerd[2145]: time="2026-04-17T23:38:24.128264190Z" level=info msg="RemoveContainer for \"d6cb68fd9a1e3af8ac0e0daa28ae3787dd817199cd55215402d13fd1ad328be1\" returns successfully"