Mar 7 01:09:09.926671 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:09:09.926704 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:09:09.926723 kernel: BIOS-provided physical RAM map:
Mar 7 01:09:09.926734 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 01:09:09.926744 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 7 01:09:09.926754 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Mar 7 01:09:09.926768 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Mar 7 01:09:09.926779 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 7 01:09:09.926790 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 7 01:09:09.926804 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 7 01:09:09.926816 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 7 01:09:09.926826 kernel: NX (Execute Disable) protection: active
Mar 7 01:09:09.926837 kernel: APIC: Static calls initialized
Mar 7 01:09:09.926864 kernel: efi: EFI v2.7 by EDK II
Mar 7 01:09:09.926880 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 7 01:09:09.926896 kernel: SMBIOS 2.7 present.
Mar 7 01:09:09.926910 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 7 01:09:09.926923 kernel: Hypervisor detected: KVM
Mar 7 01:09:09.926937 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:09:09.926951 kernel: kvm-clock: using sched offset of 3704077585 cycles
Mar 7 01:09:09.926965 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:09:09.926980 kernel: tsc: Detected 2499.998 MHz processor
Mar 7 01:09:09.926995 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:09:09.927009 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:09:09.927023 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 7 01:09:09.927041 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 01:09:09.927055 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:09:09.927081 kernel: Using GB pages for direct mapping
Mar 7 01:09:09.927095 kernel: Secure boot disabled
Mar 7 01:09:09.927109 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:09:09.927123 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 7 01:09:09.927138 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 7 01:09:09.927152 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 7 01:09:09.927166 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 7 01:09:09.927184 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 7 01:09:09.927198 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 7 01:09:09.927213 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 7 01:09:09.927227 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 7 01:09:09.927241 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 7 01:09:09.927256 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 7 01:09:09.927277 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 7 01:09:09.927296 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 7 01:09:09.927312 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 7 01:09:09.927327 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 7 01:09:09.927342 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 7 01:09:09.927358 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 7 01:09:09.927445 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 7 01:09:09.927460 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 7 01:09:09.927479 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 7 01:09:09.927494 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 7 01:09:09.927509 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 7 01:09:09.927524 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 7 01:09:09.927539 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 7 01:09:09.927554 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 7 01:09:09.927569 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:09:09.927584 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:09:09.927600 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 7 01:09:09.927618 kernel: NUMA: Initialized distance table, cnt=1
Mar 7 01:09:09.927632 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Mar 7 01:09:09.927647 kernel: Zone ranges:
Mar 7 01:09:09.927662 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:09:09.927677 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 7 01:09:09.927693 kernel: Normal empty
Mar 7 01:09:09.927708 kernel: Movable zone start for each node
Mar 7 01:09:09.927723 kernel: Early memory node ranges
Mar 7 01:09:09.927738 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:09:09.927753 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 7 01:09:09.927772 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 7 01:09:09.927788 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 7 01:09:09.927803 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:09:09.927818 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 01:09:09.927834 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 7 01:09:09.927850 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 7 01:09:09.927865 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 7 01:09:09.927881 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:09:09.927896 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 7 01:09:09.927914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:09:09.927929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:09:09.927944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:09:09.927959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:09:09.927974 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:09:09.927990 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:09:09.928005 kernel: TSC deadline timer available
Mar 7 01:09:09.928020 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:09:09.928034 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:09:09.928053 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 7 01:09:09.929117 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:09:09.929134 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:09:09.929150 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:09:09.929166 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:09:09.929181 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:09:09.929197 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:09:09.929212 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:09:09.929227 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:09:09.929250 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:09:09.929265 kernel: random: crng init done
Mar 7 01:09:09.929281 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:09:09.929298 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:09:09.929314 kernel: Fallback order for Node 0: 0
Mar 7 01:09:09.929329 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Mar 7 01:09:09.929345 kernel: Policy zone: DMA32
Mar 7 01:09:09.929360 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:09:09.929379 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162916K reserved, 0K cma-reserved)
Mar 7 01:09:09.929394 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:09:09.929410 kernel: Kernel/User page tables isolation: enabled
Mar 7 01:09:09.929423 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:09:09.929438 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:09:09.929454 kernel: Dynamic Preempt: voluntary
Mar 7 01:09:09.929469 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:09:09.929485 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:09:09.929501 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:09:09.929520 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:09:09.929536 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:09:09.929552 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:09:09.929567 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:09:09.929580 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:09:09.929595 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:09:09.929611 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:09:09.929642 kernel: Console: colour dummy device 80x25
Mar 7 01:09:09.929658 kernel: printk: console [tty0] enabled
Mar 7 01:09:09.929675 kernel: printk: console [ttyS0] enabled
Mar 7 01:09:09.929691 kernel: ACPI: Core revision 20230628
Mar 7 01:09:09.929708 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 7 01:09:09.929727 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:09:09.929743 kernel: x2apic enabled
Mar 7 01:09:09.929760 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:09:09.929777 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 7 01:09:09.929794 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 7 01:09:09.929813 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 7 01:09:09.929830 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 7 01:09:09.929846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:09:09.929862 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:09:09.929879 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:09:09.929895 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 7 01:09:09.929911 kernel: RETBleed: Vulnerable
Mar 7 01:09:09.929927 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:09:09.929943 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:09:09.929960 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:09:09.929979 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 7 01:09:09.929995 kernel: active return thunk: its_return_thunk
Mar 7 01:09:09.930011 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:09:09.930028 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:09:09.930044 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:09:09.930077 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:09:09.930094 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 7 01:09:09.930110 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 7 01:09:09.930127 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 7 01:09:09.930143 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 7 01:09:09.930160 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 7 01:09:09.930180 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 7 01:09:09.930196 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:09:09.930212 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 7 01:09:09.930229 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 7 01:09:09.930245 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 7 01:09:09.930261 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 7 01:09:09.930277 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 7 01:09:09.930293 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 7 01:09:09.930310 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 7 01:09:09.930326 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:09:09.930342 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:09:09.930358 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:09:09.930379 kernel: landlock: Up and running.
Mar 7 01:09:09.930394 kernel: SELinux: Initializing.
Mar 7 01:09:09.930411 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:09:09.930427 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:09:09.930444 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 7 01:09:09.930460 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:09:09.930477 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:09:09.930494 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:09:09.930511 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 7 01:09:09.930530 kernel: signal: max sigframe size: 3632
Mar 7 01:09:09.930546 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:09:09.930563 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:09:09.930580 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:09:09.930596 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:09:09.930613 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:09:09.930629 kernel: .... node #0, CPUs: #1
Mar 7 01:09:09.930646 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 7 01:09:09.930663 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:09:09.930683 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:09:09.930699 kernel: smpboot: Max logical packages: 1
Mar 7 01:09:09.930716 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 7 01:09:09.930732 kernel: devtmpfs: initialized
Mar 7 01:09:09.930749 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:09:09.930765 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 7 01:09:09.930782 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:09:09.930798 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:09:09.930815 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:09:09.930834 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:09:09.930851 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:09:09.930867 kernel: audit: type=2000 audit(1772845750.459:1): state=initialized audit_enabled=0 res=1
Mar 7 01:09:09.930883 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:09:09.930900 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:09:09.930916 kernel: cpuidle: using governor menu
Mar 7 01:09:09.930933 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:09:09.930949 kernel: dca service started, version 1.12.1
Mar 7 01:09:09.930966 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:09:09.930986 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:09:09.931003 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:09:09.931020 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:09:09.931037 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:09:09.931053 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:09:09.932082 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:09:09.932100 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:09:09.932116 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:09:09.932132 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 7 01:09:09.932154 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:09:09.932171 kernel: ACPI: Interpreter enabled
Mar 7 01:09:09.932187 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:09:09.932202 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:09:09.932218 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:09:09.932234 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:09:09.932249 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 7 01:09:09.932265 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:09:09.932492 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:09:09.932657 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 7 01:09:09.932794 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 7 01:09:09.932812 kernel: acpiphp: Slot [3] registered
Mar 7 01:09:09.932828 kernel: acpiphp: Slot [4] registered
Mar 7 01:09:09.932842 kernel: acpiphp: Slot [5] registered
Mar 7 01:09:09.932856 kernel: acpiphp: Slot [6] registered
Mar 7 01:09:09.932871 kernel: acpiphp: Slot [7] registered
Mar 7 01:09:09.932891 kernel: acpiphp: Slot [8] registered
Mar 7 01:09:09.932906 kernel: acpiphp: Slot [9] registered
Mar 7 01:09:09.932920 kernel: acpiphp: Slot [10] registered
Mar 7 01:09:09.932934 kernel: acpiphp: Slot [11] registered
Mar 7 01:09:09.932949 kernel: acpiphp: Slot [12] registered
Mar 7 01:09:09.932963 kernel: acpiphp: Slot [13] registered
Mar 7 01:09:09.932977 kernel: acpiphp: Slot [14] registered
Mar 7 01:09:09.932991 kernel: acpiphp: Slot [15] registered
Mar 7 01:09:09.933005 kernel: acpiphp: Slot [16] registered
Mar 7 01:09:09.933019 kernel: acpiphp: Slot [17] registered
Mar 7 01:09:09.933036 kernel: acpiphp: Slot [18] registered
Mar 7 01:09:09.933050 kernel: acpiphp: Slot [19] registered
Mar 7 01:09:09.933078 kernel: acpiphp: Slot [20] registered
Mar 7 01:09:09.933092 kernel: acpiphp: Slot [21] registered
Mar 7 01:09:09.933107 kernel: acpiphp: Slot [22] registered
Mar 7 01:09:09.933120 kernel: acpiphp: Slot [23] registered
Mar 7 01:09:09.933134 kernel: acpiphp: Slot [24] registered
Mar 7 01:09:09.933148 kernel: acpiphp: Slot [25] registered
Mar 7 01:09:09.933162 kernel: acpiphp: Slot [26] registered
Mar 7 01:09:09.933181 kernel: acpiphp: Slot [27] registered
Mar 7 01:09:09.933196 kernel: acpiphp: Slot [28] registered
Mar 7 01:09:09.933210 kernel: acpiphp: Slot [29] registered
Mar 7 01:09:09.933224 kernel: acpiphp: Slot [30] registered
Mar 7 01:09:09.933239 kernel: acpiphp: Slot [31] registered
Mar 7 01:09:09.933253 kernel: PCI host bridge to bus 0000:00
Mar 7 01:09:09.933387 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:09:09.933508 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:09:09.933632 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:09:09.933748 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 7 01:09:09.935202 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 7 01:09:09.935341 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:09:09.935513 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 7 01:09:09.935669 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 7 01:09:09.935846 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 7 01:09:09.936008 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 7 01:09:09.936172 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 7 01:09:09.936310 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 7 01:09:09.936447 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 7 01:09:09.936581 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 7 01:09:09.936715 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 7 01:09:09.936849 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 7 01:09:09.936999 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 7 01:09:09.937151 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Mar 7 01:09:09.937287 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 7 01:09:09.937420 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Mar 7 01:09:09.937556 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:09:09.937697 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 7 01:09:09.937838 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Mar 7 01:09:09.937981 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 7 01:09:09.938129 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Mar 7 01:09:09.938149 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:09:09.938165 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:09:09.938182 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:09:09.938197 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:09:09.938213 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 7 01:09:09.938232 kernel: iommu: Default domain type: Translated
Mar 7 01:09:09.938248 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:09:09.938264 kernel: efivars: Registered efivars operations
Mar 7 01:09:09.938279 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:09:09.938295 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:09:09.938310 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 7 01:09:09.938325 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 7 01:09:09.938457 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 7 01:09:09.938592 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 7 01:09:09.938731 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:09:09.938750 kernel: vgaarb: loaded
Mar 7 01:09:09.938766 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 7 01:09:09.938781 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 7 01:09:09.938797 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:09:09.938812 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:09:09.938827 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:09:09.938843 kernel: pnp: PnP ACPI init
Mar 7 01:09:09.938858 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:09:09.938878 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:09:09.938894 kernel: NET: Registered PF_INET protocol family
Mar 7 01:09:09.938909 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:09:09.938925 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 7 01:09:09.938941 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:09:09.938956 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:09:09.938972 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 7 01:09:09.938988 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 7 01:09:09.939003 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:09:09.939022 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:09:09.939038 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:09:09.939053 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:09:09.940474 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:09:09.940607 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:09:09.940732 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:09:09.940849 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 7 01:09:09.940968 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 7 01:09:09.941156 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 7 01:09:09.941182 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:09:09.941200 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:09:09.941218 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 7 01:09:09.941235 kernel: clocksource: Switched to clocksource tsc
Mar 7 01:09:09.941253 kernel: Initialise system trusted keyrings
Mar 7 01:09:09.941270 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 7 01:09:09.941289 kernel: Key type asymmetric registered
Mar 7 01:09:09.941306 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:09:09.941331 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:09:09.941349 kernel: io scheduler mq-deadline registered
Mar 7 01:09:09.941366 kernel: io scheduler kyber registered
Mar 7 01:09:09.941384 kernel: io scheduler bfq registered
Mar 7 01:09:09.941401 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:09:09.941418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:09:09.941436 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:09:09.941455 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:09:09.941472 kernel: i8042: Warning: Keylock active
Mar 7 01:09:09.941496 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:09:09.941513 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:09:09.941723 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 7 01:09:09.941909 kernel: rtc_cmos 00:00: registered as rtc0
Mar 7 01:09:09.942129 kernel: rtc_cmos 00:00: setting system clock to 2026-03-07T01:09:09 UTC (1772845749)
Mar 7 01:09:09.942279 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 7 01:09:09.942299 kernel: intel_pstate: CPU model not supported
Mar 7 01:09:09.942319 kernel: efifb: probing for efifb
Mar 7 01:09:09.942335 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Mar 7 01:09:09.942350 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 7 01:09:09.942367 kernel: efifb: scrolling: redraw
Mar 7 01:09:09.942382 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:09:09.942398 kernel: Console: switching to colour frame buffer device 100x37
Mar 7 01:09:09.942414 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:09:09.942429 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:09:09.942445 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:09:09.942460 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:09:09.942479 kernel: Segment Routing with IPv6
Mar 7 01:09:09.942495 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:09:09.942510 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:09:09.942525 kernel: Key type dns_resolver registered
Mar 7 01:09:09.942541 kernel: IPI shorthand broadcast: enabled
Mar 7 01:09:09.942582 kernel: sched_clock: Marking stable (455001925, 129011212)->(672345792, -88332655)
Mar 7 01:09:09.942602 kernel: registered taskstats version 1
Mar 7 01:09:09.942619 kernel: Loading compiled-in X.509 certificates
Mar 7 01:09:09.942634 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:09:09.942654 kernel: Key type .fscrypt registered
Mar 7 01:09:09.942670 kernel: Key type fscrypt-provisioning registered
Mar 7 01:09:09.942686 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:09:09.942702 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:09:09.942719 kernel: ima: No architecture policies found
Mar 7 01:09:09.942735 kernel: clk: Disabling unused clocks
Mar 7 01:09:09.942751 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:09:09.942767 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:09:09.942785 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:09:09.942804 kernel: Run /init as init process
Mar 7 01:09:09.942821 kernel: with arguments:
Mar 7 01:09:09.942837 kernel: /init
Mar 7 01:09:09.942852 kernel: with environment:
Mar 7 01:09:09.942869 kernel: HOME=/
Mar 7 01:09:09.942885 kernel: TERM=linux
Mar 7 01:09:09.942904 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:09:09.942924 systemd[1]: Detected virtualization amazon.
Mar 7 01:09:09.942943 systemd[1]: Detected architecture x86-64.
Mar 7 01:09:09.942960 systemd[1]: Running in initrd.
Mar 7 01:09:09.942976 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:09:09.942991 systemd[1]: Hostname set to .
Mar 7 01:09:09.943008 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:09:09.943025 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:09:09.943041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:09:09.943058 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:09:09.943248 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:09:09.943265 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:09:09.943281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:09:09.943300 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:09:09.943323 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:09:09.943340 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:09:09.943358 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:09:09.943384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:09:09.943401 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:09:09.943418 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:09:09.943436 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:09:09.943452 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:09:09.943471 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:09:09.943489 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:09:09.943504 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:09:09.943519 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:09:09.943535 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:09:09.943552 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:09:09.943569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:09:09.943585 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:09:09.943605 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:09:09.943622 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:09:09.943637 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:09:09.943655 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:09:09.943673 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:09:09.943690 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:09:09.943706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:09.943725 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:09:09.943777 systemd-journald[179]: Collecting audit messages is disabled.
Mar 7 01:09:09.943817 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:09:09.943832 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:09:09.943852 systemd-journald[179]: Journal started
Mar 7 01:09:09.943884 systemd-journald[179]: Runtime Journal (/run/log/journal/ec28486c3141364d272395b81fa7a6bd) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:09:09.951702 systemd-modules-load[180]: Inserted module 'overlay'
Mar 7 01:09:09.954197 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:09:09.960083 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:09:09.961148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:09.971327 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:09:09.983301 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:09:09.986297 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:09:09.996540 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:10.000787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:09:10.005011 systemd-modules-load[180]: Inserted module 'br_netfilter'
Mar 7 01:09:10.008128 kernel: Bridge firewalling registered
Mar 7 01:09:10.008166 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:09:10.006798 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:09:10.017355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:09:10.020864 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:09:10.024101 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:09:10.035397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:09:10.039466 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:09:10.046209 dracut-cmdline[205]: dracut-dracut-053
Mar 7 01:09:10.051054 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:09:10.052627 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:09:10.062255 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:09:10.109707 systemd-resolved[230]: Positive Trust Anchors:
Mar 7 01:09:10.109730 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:09:10.109792 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:09:10.119503 systemd-resolved[230]: Defaulting to hostname 'linux'.
Mar 7 01:09:10.120886 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:09:10.121590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:09:10.145095 kernel: SCSI subsystem initialized
Mar 7 01:09:10.154084 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:09:10.165087 kernel: iscsi: registered transport (tcp)
Mar 7 01:09:10.187470 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:09:10.187556 kernel: QLogic iSCSI HBA Driver
Mar 7 01:09:10.225979 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:09:10.230236 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:09:10.267310 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:09:10.267473 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:09:10.269118 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:09:10.310104 kernel: raid6: avx512x4 gen() 17400 MB/s
Mar 7 01:09:10.328089 kernel: raid6: avx512x2 gen() 17187 MB/s
Mar 7 01:09:10.346089 kernel: raid6: avx512x1 gen() 16718 MB/s
Mar 7 01:09:10.364085 kernel: raid6: avx2x4 gen() 16261 MB/s
Mar 7 01:09:10.382087 kernel: raid6: avx2x2 gen() 15873 MB/s
Mar 7 01:09:10.400340 kernel: raid6: avx2x1 gen() 11699 MB/s
Mar 7 01:09:10.400404 kernel: raid6: using algorithm avx512x4 gen() 17400 MB/s
Mar 7 01:09:10.419288 kernel: raid6: .... xor() 7816 MB/s, rmw enabled
Mar 7 01:09:10.419345 kernel: raid6: using avx512x2 recovery algorithm
Mar 7 01:09:10.441103 kernel: xor: automatically using best checksumming function avx
Mar 7 01:09:10.601093 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:09:10.611542 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:09:10.616287 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:09:10.640629 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Mar 7 01:09:10.645842 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:09:10.654317 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:09:10.672867 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Mar 7 01:09:10.703461 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:09:10.713310 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:09:10.764605 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:09:10.773488 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:09:10.804747 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:09:10.806704 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:09:10.808582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:09:10.809646 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:09:10.818342 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:09:10.838439 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:09:10.858080 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:09:10.883378 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:09:10.883463 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:10.886244 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:09:10.894720 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:09:10.894756 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:09:10.886801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:09:10.886887 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:10.887537 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:10.898794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:10.915094 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 7 01:09:10.915307 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 7 01:09:10.918279 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 7 01:09:10.916910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:09:10.917103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:10.928198 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:f0:93:55:0b:95
Mar 7 01:09:10.931429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:10.933510 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:09:10.959106 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 7 01:09:10.961680 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 7 01:09:10.961961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:10.970310 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:09:10.973083 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 7 01:09:10.988106 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:09:10.988181 kernel: GPT:9289727 != 33554431
Mar 7 01:09:10.988205 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:09:10.988226 kernel: GPT:9289727 != 33554431
Mar 7 01:09:10.988245 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:09:10.988263 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:09:10.989965 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:11.055095 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (457)
Mar 7 01:09:11.088773 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 7 01:09:11.095418 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (446)
Mar 7 01:09:11.116906 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 7 01:09:11.119165 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 7 01:09:11.129290 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:09:11.144322 disk-uuid[625]: Primary Header is updated.
Mar 7 01:09:11.144322 disk-uuid[625]: Secondary Entries is updated.
Mar 7 01:09:11.144322 disk-uuid[625]: Secondary Header is updated.
Mar 7 01:09:11.168857 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 7 01:09:11.176095 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 01:09:12.167754 disk-uuid[631]: The operation has completed successfully.
Mar 7 01:09:12.168959 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:09:12.316793 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:09:12.316933 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:09:12.339245 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:09:12.343250 sh[892]: Success
Mar 7 01:09:12.364272 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 7 01:09:12.473221 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:09:12.481779 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:09:12.485722 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:09:12.526362 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:09:12.526436 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:12.526471 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:09:12.530048 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:09:12.530121 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:09:12.555088 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:09:12.568477 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:09:12.569717 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:09:12.579290 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:09:12.584296 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:09:12.607097 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:12.607166 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:12.607189 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:09:12.627939 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:09:12.642816 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:12.642375 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:09:12.650409 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:09:12.659371 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:09:12.694216 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:09:12.714845 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:09:12.738679 systemd-networkd[1087]: lo: Link UP
Mar 7 01:09:12.738689 systemd-networkd[1087]: lo: Gained carrier
Mar 7 01:09:12.740411 systemd-networkd[1087]: Enumeration completed
Mar 7 01:09:12.740830 systemd-networkd[1087]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:09:12.740835 systemd-networkd[1087]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:09:12.741398 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:09:12.748141 systemd-networkd[1087]: eth0: Link UP
Mar 7 01:09:12.748148 systemd-networkd[1087]: eth0: Gained carrier
Mar 7 01:09:12.748162 systemd-networkd[1087]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:09:12.748574 systemd[1]: Reached target network.target - Network.
Mar 7 01:09:12.766590 systemd-networkd[1087]: eth0: DHCPv4 address 172.31.29.156/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 01:09:12.816487 ignition[1035]: Ignition 2.19.0
Mar 7 01:09:12.816502 ignition[1035]: Stage: fetch-offline
Mar 7 01:09:12.816777 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:12.816791 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:12.818885 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:09:12.817231 ignition[1035]: Ignition finished successfully
Mar 7 01:09:12.827301 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:09:12.843012 ignition[1098]: Ignition 2.19.0
Mar 7 01:09:12.843025 ignition[1098]: Stage: fetch
Mar 7 01:09:12.843732 ignition[1098]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:12.843749 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:12.843882 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:12.862842 ignition[1098]: PUT result: OK
Mar 7 01:09:12.865498 ignition[1098]: parsed url from cmdline: ""
Mar 7 01:09:12.865510 ignition[1098]: no config URL provided
Mar 7 01:09:12.865520 ignition[1098]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:09:12.865534 ignition[1098]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:09:12.865556 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:12.866435 ignition[1098]: PUT result: OK
Mar 7 01:09:12.866479 ignition[1098]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 7 01:09:12.867571 ignition[1098]: GET result: OK
Mar 7 01:09:12.867685 ignition[1098]: parsing config with SHA512: 06d74d0fe45d39afc006c64a6446e8b017c424c49a390699f83b48a9a0b28d1c41d5a388d67908af06ca2f952d97d012981a70e1f30ae94c4a7eee9b4e435611
Mar 7 01:09:12.871933 unknown[1098]: fetched base config from "system"
Mar 7 01:09:12.871947 unknown[1098]: fetched base config from "system"
Mar 7 01:09:12.872523 ignition[1098]: fetch: fetch complete
Mar 7 01:09:12.871958 unknown[1098]: fetched user config from "aws"
Mar 7 01:09:12.872531 ignition[1098]: fetch: fetch passed
Mar 7 01:09:12.872586 ignition[1098]: Ignition finished successfully
Mar 7 01:09:12.875706 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:09:12.880277 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:09:12.896536 ignition[1104]: Ignition 2.19.0
Mar 7 01:09:12.896550 ignition[1104]: Stage: kargs
Mar 7 01:09:12.896990 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:12.897004 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:12.897147 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:12.898027 ignition[1104]: PUT result: OK
Mar 7 01:09:12.900524 ignition[1104]: kargs: kargs passed
Mar 7 01:09:12.900601 ignition[1104]: Ignition finished successfully
Mar 7 01:09:12.902509 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:09:12.907287 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:09:12.922956 ignition[1110]: Ignition 2.19.0
Mar 7 01:09:12.922970 ignition[1110]: Stage: disks
Mar 7 01:09:12.923576 ignition[1110]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:12.923590 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:12.923738 ignition[1110]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:12.924747 ignition[1110]: PUT result: OK
Mar 7 01:09:12.930680 ignition[1110]: disks: disks passed
Mar 7 01:09:12.930755 ignition[1110]: Ignition finished successfully
Mar 7 01:09:12.932074 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:09:12.933014 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:09:12.933667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:09:12.934010 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:09:12.934554 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:09:12.935094 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:09:12.947310 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:09:12.983226 systemd-fsck[1118]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:09:12.986534 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:09:12.992188 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:09:13.106097 kernel: EXT4-fs (nvme0n1p9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:09:13.106365 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:09:13.107518 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:09:13.120196 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:09:13.123201 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:09:13.125692 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:09:13.125768 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:09:13.125801 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:09:13.140935 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:09:13.142087 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1137)
Mar 7 01:09:13.143137 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:13.143177 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:13.143198 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:09:13.157289 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:09:13.162093 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:09:13.163577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:09:13.235303 initrd-setup-root[1161]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:09:13.251857 initrd-setup-root[1168]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:09:13.259095 initrd-setup-root[1175]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:09:13.263822 initrd-setup-root[1182]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:09:13.364108 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:09:13.367196 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:09:13.374341 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:09:13.386093 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:13.414684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:09:13.418091 ignition[1250]: INFO : Ignition 2.19.0
Mar 7 01:09:13.418091 ignition[1250]: INFO : Stage: mount
Mar 7 01:09:13.420011 ignition[1250]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:13.420011 ignition[1250]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:13.420011 ignition[1250]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:13.421609 ignition[1250]: INFO : PUT result: OK
Mar 7 01:09:13.422704 ignition[1250]: INFO : mount: mount passed
Mar 7 01:09:13.423198 ignition[1250]: INFO : Ignition finished successfully
Mar 7 01:09:13.424603 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:09:13.431246 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:09:13.521519 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:09:13.529271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:09:13.546099 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1261)
Mar 7 01:09:13.546164 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:13.549835 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:13.549902 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:09:13.556530 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:09:13.558095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:09:13.579087 ignition[1278]: INFO : Ignition 2.19.0
Mar 7 01:09:13.579087 ignition[1278]: INFO : Stage: files
Mar 7 01:09:13.580446 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:13.580446 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:13.580446 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:13.581684 ignition[1278]: INFO : PUT result: OK
Mar 7 01:09:13.584446 ignition[1278]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:09:13.585179 ignition[1278]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:09:13.585179 ignition[1278]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:09:13.588869 ignition[1278]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:09:13.589655 ignition[1278]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:09:13.589655 ignition[1278]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:09:13.589387 unknown[1278]: wrote ssh authorized keys file for user: core
Mar 7 01:09:13.592181 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:09:13.592181 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:09:13.674823 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:09:13.858232 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:09:13.858232 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:09:13.860644 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 7 01:09:14.332549 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:09:14.570229 systemd-networkd[1087]: eth0: Gained IPv6LL
Mar 7 01:09:15.465751 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:09:15.465751 ignition[1278]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:09:15.469407 ignition[1278]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:09:15.469407 ignition[1278]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:09:15.469407 ignition[1278]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:09:15.469407 ignition[1278]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:09:15.469407 ignition[1278]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:09:15.469407 ignition[1278]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:09:15.469407 ignition[1278]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:09:15.469407 ignition[1278]: INFO : files: files passed
Mar 7 01:09:15.469407 ignition[1278]: INFO : Ignition finished successfully
Mar 7 01:09:15.469498 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:09:15.475373 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:09:15.482294 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:09:15.487491 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:09:15.487619 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:09:15.499662 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:09:15.499662 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:09:15.502906 initrd-setup-root-after-ignition[1310]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:09:15.503522 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:09:15.505418 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:09:15.510306 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:09:15.542963 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:09:15.543163 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:09:15.544407 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:09:15.545449 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:09:15.546246 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:09:15.555331 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:09:15.569093 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:09:15.574284 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:09:15.593218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:09:15.593883 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:09:15.594857 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:09:15.595763 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:09:15.595936 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:09:15.597048 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:09:15.597870 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:09:15.598635 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:09:15.599467 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:09:15.600265 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:09:15.601021 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:09:15.601794 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:09:15.602582 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:09:15.603815 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:09:15.604573 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:09:15.605290 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:09:15.605469 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:09:15.606552 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:09:15.607425 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:09:15.608052 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:09:15.608199 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:09:15.608887 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:09:15.609113 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:09:15.610421 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:09:15.610597 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:09:15.611321 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:09:15.611540 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:09:15.618316 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:09:15.620311 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:09:15.622143 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:09:15.622393 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:09:15.624375 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:09:15.624580 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:09:15.632347 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:09:15.633165 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:09:15.643868 ignition[1330]: INFO : Ignition 2.19.0
Mar 7 01:09:15.646041 ignition[1330]: INFO : Stage: umount
Mar 7 01:09:15.646041 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:15.646041 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:15.646041 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:15.649234 ignition[1330]: INFO : PUT result: OK
Mar 7 01:09:15.652221 ignition[1330]: INFO : umount: umount passed
Mar 7 01:09:15.652221 ignition[1330]: INFO : Ignition finished successfully
Mar 7 01:09:15.654676 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:09:15.654803 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:09:15.655969 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:09:15.658105 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:09:15.659237 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:09:15.659304 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:09:15.660845 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:09:15.660900 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:09:15.661853 systemd[1]: Stopped target network.target - Network.
Mar 7 01:09:15.662186 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:09:15.662248 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:09:15.662882 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:09:15.663561 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:09:15.665119 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:09:15.665499 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:09:15.666387 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:09:15.666998 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:09:15.667074 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:09:15.667759 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:09:15.667817 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:09:15.668371 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:09:15.668446 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:09:15.669023 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:09:15.669102 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:09:15.669830 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:09:15.670479 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:09:15.672862 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:09:15.673121 systemd-networkd[1087]: eth0: DHCPv6 lease lost
Mar 7 01:09:15.674542 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:09:15.674677 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:09:15.676046 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:09:15.676140 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:09:15.685284 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:09:15.687167 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:09:15.687254 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:09:15.688160 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:09:15.690425 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:09:15.690563 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:09:15.703586 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:09:15.703856 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:09:15.705162 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:09:15.705288 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:09:15.708040 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:09:15.708136 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:09:15.708987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:09:15.709036 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:09:15.709638 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:09:15.709702 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:09:15.710743 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:09:15.710803 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:09:15.712009 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:09:15.712085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:15.721324 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:09:15.721841 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:09:15.721924 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:09:15.722561 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:09:15.722615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:09:15.723221 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:09:15.723282 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:09:15.724013 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:09:15.726623 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:09:15.727780 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:09:15.727843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:09:15.729105 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:09:15.729165 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:09:15.730751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:09:15.730793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:15.731738 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:09:15.731854 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:09:15.801365 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:09:15.801512 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:09:15.803152 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:09:15.803796 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:09:15.803890 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:09:15.809273 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:09:15.819254 systemd[1]: Switching root.
Mar 7 01:09:15.849510 systemd-journald[179]: Journal stopped
Mar 7 01:09:17.089055 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:09:17.090349 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:09:17.090375 kernel: SELinux: policy capability open_perms=1
Mar 7 01:09:17.090398 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:09:17.090417 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:09:17.090438 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:09:17.090457 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:09:17.090476 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:09:17.090496 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:09:17.090518 kernel: audit: type=1403 audit(1772845756.022:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:09:17.090545 systemd[1]: Successfully loaded SELinux policy in 41.353ms.
Mar 7 01:09:17.090573 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.767ms.
Mar 7 01:09:17.090595 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:09:17.090614 systemd[1]: Detected virtualization amazon.
Mar 7 01:09:17.090633 systemd[1]: Detected architecture x86-64.
Mar 7 01:09:17.090654 systemd[1]: Detected first boot.
Mar 7 01:09:17.090672 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:09:17.090698 zram_generator::config[1374]: No configuration found.
Mar 7 01:09:17.090721 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:09:17.090741 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:09:17.090760 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:09:17.090782 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:09:17.090802 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:09:17.090822 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:09:17.090841 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:09:17.090861 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:09:17.090885 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:09:17.090905 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:09:17.090925 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:09:17.090945 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:09:17.090964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:09:17.090982 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:09:17.091002 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:09:17.091027 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:09:17.091049 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:09:17.092357 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:09:17.092390 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:09:17.092414 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:09:17.092447 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:09:17.092470 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:09:17.092494 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:09:17.092517 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:09:17.092545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:09:17.092567 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:09:17.092590 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:09:17.092613 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:09:17.092635 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:09:17.092660 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:09:17.092682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:09:17.092704 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:09:17.092727 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:09:17.092749 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:09:17.092775 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:09:17.092797 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:09:17.092820 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:09:17.092844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:09:17.092867 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:09:17.092889 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:09:17.092911 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:09:17.092935 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:09:17.092962 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:09:17.092985 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:09:17.093008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:09:17.093033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:09:17.093056 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:09:17.095138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:09:17.095162 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:09:17.095185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:09:17.095212 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:09:17.095232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:09:17.095255 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:09:17.095275 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:09:17.095296 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:09:17.095316 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:09:17.095345 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:09:17.095366 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:09:17.095387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:09:17.095411 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:09:17.095432 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:09:17.095460 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:09:17.095480 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:09:17.095502 systemd[1]: Stopped verity-setup.service.
Mar 7 01:09:17.095522 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:09:17.095544 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:09:17.095565 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:09:17.095585 kernel: fuse: init (API version 7.39)
Mar 7 01:09:17.095611 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:09:17.095633 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:09:17.095653 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:09:17.095674 kernel: loop: module loaded
Mar 7 01:09:17.095695 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:09:17.095719 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:09:17.095741 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:09:17.095763 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:09:17.095784 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:09:17.095804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:09:17.095826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:09:17.095850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:09:17.095876 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:09:17.095897 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:09:17.095918 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:09:17.095939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:09:17.095999 systemd-journald[1463]: Collecting audit messages is disabled.
Mar 7 01:09:17.096043 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:09:17.102128 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:09:17.102164 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:09:17.102185 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:09:17.102204 kernel: ACPI: bus type drm_connector registered
Mar 7 01:09:17.102225 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:09:17.102249 systemd-journald[1463]: Journal started
Mar 7 01:09:17.102295 systemd-journald[1463]: Runtime Journal (/run/log/journal/ec28486c3141364d272395b81fa7a6bd) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:09:16.687171 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:09:16.705872 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 7 01:09:16.706313 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:09:17.111088 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:09:17.118086 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:09:17.121096 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:09:17.126099 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:09:17.140102 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:09:17.152107 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:09:17.159086 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:09:17.172103 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:09:17.177094 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:09:17.183729 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:09:17.187112 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:09:17.197898 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:09:17.203092 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:09:17.215110 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:09:17.226108 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:09:17.233514 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:09:17.234624 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:09:17.234804 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:09:17.239532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:09:17.241912 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:09:17.242730 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:09:17.246198 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:09:17.252501 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:09:17.293756 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:09:17.303091 kernel: loop0: detected capacity change from 0 to 61336
Mar 7 01:09:17.310289 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:09:17.320299 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:09:17.323222 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:09:17.326132 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:09:17.342684 systemd-tmpfiles[1486]: ACLs are not supported, ignoring.
Mar 7 01:09:17.342712 systemd-tmpfiles[1486]: ACLs are not supported, ignoring.
Mar 7 01:09:17.354375 systemd-journald[1463]: Time spent on flushing to /var/log/journal/ec28486c3141364d272395b81fa7a6bd is 81.404ms for 990 entries.
Mar 7 01:09:17.354375 systemd-journald[1463]: System Journal (/var/log/journal/ec28486c3141364d272395b81fa7a6bd) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:09:17.450460 systemd-journald[1463]: Received client request to flush runtime journal.
Mar 7 01:09:17.450532 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:09:17.450563 kernel: loop1: detected capacity change from 0 to 219192
Mar 7 01:09:17.356412 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:09:17.363272 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:09:17.405977 udevadm[1514]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:09:17.454973 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:09:17.472643 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:09:17.480345 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:09:17.486452 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:09:17.491521 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:09:17.520769 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
Mar 7 01:09:17.520798 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
Mar 7 01:09:17.528330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:09:17.686100 kernel: loop2: detected capacity change from 0 to 142488
Mar 7 01:09:17.777196 kernel: loop3: detected capacity change from 0 to 140768
Mar 7 01:09:17.862082 kernel: loop4: detected capacity change from 0 to 61336
Mar 7 01:09:17.899089 kernel: loop5: detected capacity change from 0 to 219192
Mar 7 01:09:17.944601 kernel: loop6: detected capacity change from 0 to 142488
Mar 7 01:09:17.988664 kernel: loop7: detected capacity change from 0 to 140768
Mar 7 01:09:18.022211 (sd-merge)[1531]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 7 01:09:18.025372 (sd-merge)[1531]: Merged extensions into '/usr'.
Mar 7 01:09:18.032299 systemd[1]: Reloading requested from client PID 1485 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:09:18.032499 systemd[1]: Reloading...
Mar 7 01:09:18.133675 ldconfig[1481]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:09:18.150093 zram_generator::config[1557]: No configuration found.
Mar 7 01:09:18.326810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:09:18.380925 systemd[1]: Reloading finished in 347 ms.
Mar 7 01:09:18.410975 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:09:18.412278 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:09:18.413000 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:09:18.421265 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:09:18.424275 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:09:18.434499 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:09:18.444954 systemd[1]: Reloading requested from client PID 1610 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:09:18.444975 systemd[1]: Reloading...
Mar 7 01:09:18.475470 systemd-udevd[1612]: Using default interface naming scheme 'v255'.
Mar 7 01:09:18.491152 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:09:18.491761 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:09:18.493615 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:09:18.494282 systemd-tmpfiles[1611]: ACLs are not supported, ignoring.
Mar 7 01:09:18.495214 systemd-tmpfiles[1611]: ACLs are not supported, ignoring.
Mar 7 01:09:18.499422 systemd-tmpfiles[1611]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:09:18.499536 systemd-tmpfiles[1611]: Skipping /boot
Mar 7 01:09:18.527506 systemd-tmpfiles[1611]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:09:18.527655 systemd-tmpfiles[1611]: Skipping /boot
Mar 7 01:09:18.590167 zram_generator::config[1648]: No configuration found.
Mar 7 01:09:18.679223 (udev-worker)[1642]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:09:18.749990 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:09:18.777088 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 7 01:09:18.783088 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:09:18.783183 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Mar 7 01:09:18.785081 kernel: ACPI: button: Sleep Button [SLPF]
Mar 7 01:09:18.820084 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Mar 7 01:09:18.891083 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:09:18.912094 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1640)
Mar 7 01:09:18.920625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:09:19.058954 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:09:19.059214 systemd[1]: Reloading finished in 613 ms.
Mar 7 01:09:19.079942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:09:19.082642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:09:19.116764 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:09:19.125582 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:09:19.143789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 01:09:19.144560 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:09:19.150357 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:09:19.156318 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 01:09:19.159538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:09:19.167372 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:09:19.169656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:09:19.180303 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:09:19.185293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:09:19.195722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:09:19.198550 lvm[1806]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:09:19.197315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:09:19.207276 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:09:19.220960 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:09:19.229564 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:09:19.235158 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:09:19.236816 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:09:19.245041 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:09:19.255120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 7 01:09:19.255921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:09:19.257865 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:09:19.258685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:09:19.260832 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:09:19.261021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:09:19.262467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:09:19.263253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:09:19.264425 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:09:19.265270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:09:19.273392 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:09:19.273477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:09:19.283340 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:09:19.285156 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:09:19.288689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:09:19.296372 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:09:19.298053 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:09:19.322378 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:09:19.333114 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Mar 7 01:09:19.339914 lvm[1833]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:09:19.340328 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:09:19.370201 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 01:09:19.378035 augenrules[1846]: No rules Mar 7 01:09:19.382212 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:09:19.395418 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:09:19.396517 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 01:09:19.398671 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:09:19.405258 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:09:19.494853 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:09:19.503184 systemd-resolved[1819]: Positive Trust Anchors: Mar 7 01:09:19.503201 systemd-resolved[1819]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:09:19.503262 systemd-resolved[1819]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:09:19.509802 systemd-resolved[1819]: Defaulting to hostname 'linux'. 
Mar 7 01:09:19.512043 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:09:19.512541 systemd-networkd[1818]: lo: Link UP Mar 7 01:09:19.512547 systemd-networkd[1818]: lo: Gained carrier Mar 7 01:09:19.512819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:09:19.513462 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:09:19.514176 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:09:19.514711 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:09:19.515028 systemd-networkd[1818]: Enumeration completed Mar 7 01:09:19.515450 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:09:19.516039 systemd-networkd[1818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:09:19.516055 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:09:19.516562 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 01:09:19.516687 systemd-networkd[1818]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:09:19.517050 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:09:19.517104 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:09:19.517653 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:09:19.518835 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:09:19.521017 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Mar 7 01:09:19.523218 systemd-networkd[1818]: eth0: Link UP Mar 7 01:09:19.523456 systemd-networkd[1818]: eth0: Gained carrier Mar 7 01:09:19.523478 systemd-networkd[1818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:09:19.530170 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:09:19.531641 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:09:19.532333 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:09:19.532818 systemd[1]: Reached target network.target - Network. Mar 7 01:09:19.533224 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:09:19.533579 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:09:19.533958 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:09:19.533994 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:09:19.535134 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:09:19.537129 systemd-networkd[1818]: eth0: DHCPv4 address 172.31.29.156/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 7 01:09:19.539250 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 01:09:19.549280 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:09:19.554196 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:09:19.556242 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:09:19.557566 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:09:19.569271 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 7 01:09:19.577366 systemd[1]: Started ntpd.service - Network Time Service. Mar 7 01:09:19.581963 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:09:19.602182 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 7 01:09:19.617716 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:09:19.623656 jq[1869]: false Mar 7 01:09:19.624277 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:09:19.639295 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:09:19.648286 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:09:19.650170 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 01:09:19.650861 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:09:19.658277 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 01:09:19.667674 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:09:19.673549 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:09:19.675400 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Mar 7 01:09:19.701093 coreos-metadata[1867]: Mar 07 01:09:19.698 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 7 01:09:19.701093 coreos-metadata[1867]: Mar 07 01:09:19.700 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 7 01:09:19.707144 coreos-metadata[1867]: Mar 07 01:09:19.702 INFO Fetch successful
Mar 7 01:09:19.707144 coreos-metadata[1867]: Mar 07 01:09:19.702 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 7 01:09:19.707144 coreos-metadata[1867]: Mar 07 01:09:19.706 INFO Fetch successful
Mar 7 01:09:19.707144 coreos-metadata[1867]: Mar 07 01:09:19.706 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 7 01:09:19.724185 coreos-metadata[1867]: Mar 07 01:09:19.724 INFO Fetch successful
Mar 7 01:09:19.724185 coreos-metadata[1867]: Mar 07 01:09:19.724 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 7 01:09:19.731651 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:09:19.731937 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:09:19.733890 dbus-daemon[1868]: [system] SELinux support is enabled
Mar 7 01:09:19.740505 coreos-metadata[1867]: Mar 07 01:09:19.734 INFO Fetch successful
Mar 7 01:09:19.740505 coreos-metadata[1867]: Mar 07 01:09:19.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 7 01:09:19.740505 coreos-metadata[1867]: Mar 07 01:09:19.738 INFO Fetch failed with 404: resource not found
Mar 7 01:09:19.740505 coreos-metadata[1867]: Mar 07 01:09:19.738 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 7 01:09:19.740505 coreos-metadata[1867]: Mar 07 01:09:19.740 INFO Fetch successful
Mar 7 01:09:19.740505 coreos-metadata[1867]: Mar 07 01:09:19.740 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 7 01:09:19.734235 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:09:19.747260 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:09:19.752590 dbus-daemon[1868]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1818 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 7 01:09:19.753298 coreos-metadata[1867]: Mar 07 01:09:19.750 INFO Fetch successful
Mar 7 01:09:19.753298 coreos-metadata[1867]: Mar 07 01:09:19.750 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 7 01:09:19.748794 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:09:19.749528 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:09:19.749552 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:09:19.756213 coreos-metadata[1867]: Mar 07 01:09:19.755 INFO Fetch successful
Mar 7 01:09:19.756213 coreos-metadata[1867]: Mar 07 01:09:19.755 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 7 01:09:19.756213 coreos-metadata[1867]: Mar 07 01:09:19.756 INFO Fetch successful
Mar 7 01:09:19.756213 coreos-metadata[1867]: Mar 07 01:09:19.756 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 7 01:09:19.756448 update_engine[1883]: I20260307 01:09:19.751540 1883 main.cc:92] Flatcar Update Engine starting
Mar 7 01:09:19.757631 dbus-daemon[1868]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:09:19.767594 jq[1886]: true
Mar 7 01:09:19.772755 coreos-metadata[1867]: Mar 07 01:09:19.766 INFO Fetch successful
Mar 7 01:09:19.771941 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 7 01:09:19.774680 update_engine[1883]: I20260307 01:09:19.774037 1883 update_check_scheduler.cc:74] Next update check in 4m14s
Mar 7 01:09:19.777418 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:09:19.787258 extend-filesystems[1870]: Found loop4
Mar 7 01:09:19.787258 extend-filesystems[1870]: Found loop5
Mar 7 01:09:19.787258 extend-filesystems[1870]: Found loop6
Mar 7 01:09:19.787258 extend-filesystems[1870]: Found loop7
Mar 7 01:09:19.787258 extend-filesystems[1870]: Found nvme0n1
Mar 7 01:09:19.787363 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:09:19.792712 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:09:19.792712 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:09:19.792712 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: ----------------------------------------------------
Mar 7 01:09:19.792712 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:09:19.792712 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:09:19.792712 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: corporation. Support and training for ntp-4 are
Mar 7 01:09:19.792712 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: available at https://www.nwtime.org/support
Mar 7 01:09:19.792712 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: ----------------------------------------------------
Mar 7 01:09:19.790318 ntpd[1872]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:09:19.790342 ntpd[1872]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:09:19.790353 ntpd[1872]: ----------------------------------------------------
Mar 7 01:09:19.790364 ntpd[1872]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:09:19.790374 ntpd[1872]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:09:19.790384 ntpd[1872]: corporation. Support and training for ntp-4 are
Mar 7 01:09:19.790395 ntpd[1872]: available at https://www.nwtime.org/support
Mar 7 01:09:19.790404 ntpd[1872]: ----------------------------------------------------
Mar 7 01:09:19.794266 (ntainerd)[1890]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:09:19.814202 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: proto: precision = 0.076 usec (-24)
Mar 7 01:09:19.814202 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: basedate set to 2026-02-22
Mar 7 01:09:19.814202 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:09:19.814381 extend-filesystems[1870]: Found nvme0n1p1
Mar 7 01:09:19.814381 extend-filesystems[1870]: Found nvme0n1p2
Mar 7 01:09:19.814381 extend-filesystems[1870]: Found nvme0n1p3
Mar 7 01:09:19.814381 extend-filesystems[1870]: Found usr
Mar 7 01:09:19.814381 extend-filesystems[1870]: Found nvme0n1p4
Mar 7 01:09:19.814381 extend-filesystems[1870]: Found nvme0n1p6
Mar 7 01:09:19.814381 extend-filesystems[1870]: Found nvme0n1p7
Mar 7 01:09:19.814381 extend-filesystems[1870]: Found nvme0n1p9
Mar 7 01:09:19.814381 extend-filesystems[1870]: Checking size of /dev/nvme0n1p9
Mar 7 01:09:19.811679 ntpd[1872]: proto: precision = 0.076 usec (-24)
Mar 7 01:09:19.817475 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:09:19.851974 tar[1888]: linux-amd64/LICENSE
Mar 7 01:09:19.851974 tar[1888]: linux-amd64/helm
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: Listen normally on 3 eth0 172.31.29.156:123
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: Listen normally on 4 lo [::1]:123
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: bind(21) AF_INET6 fe80::4f0:93ff:fe55:b95%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: unable to create socket on eth0 (5) for fe80::4f0:93ff:fe55:b95%2#123
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: failed to init interface for address fe80::4f0:93ff:fe55:b95%2
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: Listening on routing socket on fd #21 for interface updates
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:09:19.856123 ntpd[1872]: 7 Mar 01:09:19 ntpd[1872]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:09:19.812710 ntpd[1872]: basedate set to 2026-02-22
Mar 7 01:09:19.817708 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:09:19.812728 ntpd[1872]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:09:19.845271 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 01:09:19.832054 ntpd[1872]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:09:19.851459 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:09:19.834154 ntpd[1872]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:09:19.859522 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 7 01:09:19.834868 ntpd[1872]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:09:19.834914 ntpd[1872]: Listen normally on 3 eth0 172.31.29.156:123
Mar 7 01:09:19.834959 ntpd[1872]: Listen normally on 4 lo [::1]:123
Mar 7 01:09:19.835015 ntpd[1872]: bind(21) AF_INET6 fe80::4f0:93ff:fe55:b95%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:09:19.835039 ntpd[1872]: unable to create socket on eth0 (5) for fe80::4f0:93ff:fe55:b95%2#123
Mar 7 01:09:19.835056 ntpd[1872]: failed to init interface for address fe80::4f0:93ff:fe55:b95%2
Mar 7 01:09:19.839887 ntpd[1872]: Listening on routing socket on fd #21 for interface updates
Mar 7 01:09:19.841411 ntpd[1872]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:09:19.841450 ntpd[1872]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:09:19.883084 extend-filesystems[1870]: Resized partition /dev/nvme0n1p9
Mar 7 01:09:19.890323 extend-filesystems[1926]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:09:19.902224 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 7 01:09:19.902921 jq[1907]: true
Mar 7 01:09:19.993097 systemd-logind[1880]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:09:19.993158 systemd-logind[1880]: Watching system buttons on /dev/input/event2 (Sleep Button)
Mar 7 01:09:19.993188 systemd-logind[1880]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:09:19.993442 systemd-logind[1880]: New seat seat0.
Mar 7 01:09:19.996929 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:09:20.078024 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1640)
Mar 7 01:09:20.186955 bash[1949]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:09:20.186287 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:09:20.203202 systemd[1]: Starting sshkeys.service...
Mar 7 01:09:20.231144 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 7 01:09:20.275300 extend-filesystems[1926]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 7 01:09:20.275300 extend-filesystems[1926]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 7 01:09:20.275300 extend-filesystems[1926]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 7 01:09:20.277554 extend-filesystems[1870]: Resized filesystem in /dev/nvme0n1p9
Mar 7 01:09:20.276317 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:09:20.277116 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:09:20.278377 locksmithd[1909]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:09:20.290796 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:09:20.301288 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:09:20.327689 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 7 01:09:20.327376 dbus-daemon[1868]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 7 01:09:20.331740 dbus-daemon[1868]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1905 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 7 01:09:20.340604 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 7 01:09:20.417685 polkitd[2028]: Started polkitd version 121
Mar 7 01:09:20.463136 polkitd[2028]: Loading rules from directory /etc/polkit-1/rules.d
Mar 7 01:09:20.463249 polkitd[2028]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 7 01:09:20.464793 polkitd[2028]: Finished loading, compiling and executing 2 rules
Mar 7 01:09:20.471498 dbus-daemon[1868]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 7 01:09:20.481515 systemd[1]: Started polkit.service - Authorization Manager.
Mar 7 01:09:20.487293 containerd[1890]: time="2026-03-07T01:09:20.485957091Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:09:20.490119 polkitd[2028]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 7 01:09:20.548804 systemd-hostnamed[1905]: Hostname set to (transient)
Mar 7 01:09:20.548938 systemd-resolved[1819]: System hostname changed to 'ip-172-31-29-156'.
Mar 7 01:09:20.575217 coreos-metadata[2026]: Mar 07 01:09:20.573 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 7 01:09:20.583521 coreos-metadata[2026]: Mar 07 01:09:20.578 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 7 01:09:20.584973 coreos-metadata[2026]: Mar 07 01:09:20.584 INFO Fetch successful
Mar 7 01:09:20.584973 coreos-metadata[2026]: Mar 07 01:09:20.584 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 7 01:09:20.587128 coreos-metadata[2026]: Mar 07 01:09:20.586 INFO Fetch successful
Mar 7 01:09:20.592731 unknown[2026]: wrote ssh authorized keys file for user: core
Mar 7 01:09:20.638186 update-ssh-keys[2066]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:09:20.639851 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 7 01:09:20.649131 systemd[1]: Finished sshkeys.service.
Mar 7 01:09:20.659152 containerd[1890]: time="2026-03-07T01:09:20.659099467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:20.661320 containerd[1890]: time="2026-03-07T01:09:20.661271207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:20.661444 containerd[1890]: time="2026-03-07T01:09:20.661427100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:09:20.661520 containerd[1890]: time="2026-03-07T01:09:20.661507112Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:09:20.661776 containerd[1890]: time="2026-03-07T01:09:20.661757318Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:09:20.661857 containerd[1890]: time="2026-03-07T01:09:20.661842321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:20.661986 containerd[1890]: time="2026-03-07T01:09:20.661967581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:20.662051 containerd[1890]: time="2026-03-07T01:09:20.662037916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:20.662402 containerd[1890]: time="2026-03-07T01:09:20.662380675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:20.662469 containerd[1890]: time="2026-03-07T01:09:20.662457398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:20.662546 containerd[1890]: time="2026-03-07T01:09:20.662531476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:20.662957 containerd[1890]: time="2026-03-07T01:09:20.662594262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:20.662957 containerd[1890]: time="2026-03-07T01:09:20.662688979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:20.662957 containerd[1890]: time="2026-03-07T01:09:20.662923067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:20.663279 containerd[1890]: time="2026-03-07T01:09:20.663256579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:20.663359 containerd[1890]: time="2026-03-07T01:09:20.663344773Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:09:20.663513 containerd[1890]: time="2026-03-07T01:09:20.663497279Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:09:20.663651 containerd[1890]: time="2026-03-07T01:09:20.663637522Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:09:20.668330 containerd[1890]: time="2026-03-07T01:09:20.667972589Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:09:20.670174 containerd[1890]: time="2026-03-07T01:09:20.670120234Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:09:20.670248 containerd[1890]: time="2026-03-07T01:09:20.670184774Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:09:20.670248 containerd[1890]: time="2026-03-07T01:09:20.670207440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:09:20.670248 containerd[1890]: time="2026-03-07T01:09:20.670229429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:09:20.670444 containerd[1890]: time="2026-03-07T01:09:20.670421523Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:09:20.670979 containerd[1890]: time="2026-03-07T01:09:20.670872516Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:09:20.671040 containerd[1890]: time="2026-03-07T01:09:20.671011489Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:09:20.671117 containerd[1890]: time="2026-03-07T01:09:20.671035825Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:09:20.671117 containerd[1890]: time="2026-03-07T01:09:20.671055405Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:09:20.671191 containerd[1890]: time="2026-03-07T01:09:20.671102225Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:09:20.671191 containerd[1890]: time="2026-03-07T01:09:20.671140611Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:09:20.671191 containerd[1890]: time="2026-03-07T01:09:20.671160302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:09:20.671191 containerd[1890]: time="2026-03-07T01:09:20.671182955Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:09:20.671351 containerd[1890]: time="2026-03-07T01:09:20.671206045Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:09:20.671351 containerd[1890]: time="2026-03-07T01:09:20.671227233Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:09:20.671351 containerd[1890]: time="2026-03-07T01:09:20.671246104Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:09:20.671351 containerd[1890]: time="2026-03-07T01:09:20.671264598Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:09:20.671351 containerd[1890]: time="2026-03-07T01:09:20.671299274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671351 containerd[1890]: time="2026-03-07T01:09:20.671328159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671351 containerd[1890]: time="2026-03-07T01:09:20.671348246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671370556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671389918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671409893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671427745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671448058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671467862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671489375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671508820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671537547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671556924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671603 containerd[1890]: time="2026-03-07T01:09:20.671591431Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:09:20.671998 containerd[1890]: time="2026-03-07T01:09:20.671625031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671998 containerd[1890]: time="2026-03-07T01:09:20.671645194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.671998 containerd[1890]: time="2026-03-07T01:09:20.671661735Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:09:20.675114 containerd[1890]: time="2026-03-07T01:09:20.674099438Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:09:20.675114 containerd[1890]: time="2026-03-07T01:09:20.674224434Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:09:20.675114 containerd[1890]: time="2026-03-07T01:09:20.674246166Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:09:20.675114 containerd[1890]: time="2026-03-07T01:09:20.674266204Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:09:20.675114 containerd[1890]: time="2026-03-07T01:09:20.674282728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:09:20.675114 containerd[1890]: time="2026-03-07T01:09:20.674303033Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..."
type=io.containerd.nri.v1 Mar 7 01:09:20.675114 containerd[1890]: time="2026-03-07T01:09:20.674318156Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:09:20.675114 containerd[1890]: time="2026-03-07T01:09:20.674333516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:09:20.675452 containerd[1890]: time="2026-03-07T01:09:20.674727331Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:09:20.675452 containerd[1890]: time="2026-03-07T01:09:20.674814969Z" level=info msg="Connect containerd service" Mar 7 01:09:20.675452 containerd[1890]: time="2026-03-07T01:09:20.674861525Z" level=info msg="using legacy CRI server" Mar 7 01:09:20.675452 containerd[1890]: time="2026-03-07T01:09:20.674871445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:09:20.675452 containerd[1890]: time="2026-03-07T01:09:20.675004585Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:09:20.676257 containerd[1890]: time="2026-03-07T01:09:20.675882845Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:09:20.678209 containerd[1890]: time="2026-03-07T01:09:20.678156831Z" level=info msg="Start subscribing containerd event" Mar 7 
01:09:20.678267 containerd[1890]: time="2026-03-07T01:09:20.678228935Z" level=info msg="Start recovering state" Mar 7 01:09:20.678718 containerd[1890]: time="2026-03-07T01:09:20.678308458Z" level=info msg="Start event monitor" Mar 7 01:09:20.678718 containerd[1890]: time="2026-03-07T01:09:20.678333639Z" level=info msg="Start snapshots syncer" Mar 7 01:09:20.678718 containerd[1890]: time="2026-03-07T01:09:20.678346924Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:09:20.678718 containerd[1890]: time="2026-03-07T01:09:20.678358572Z" level=info msg="Start streaming server" Mar 7 01:09:20.678898 containerd[1890]: time="2026-03-07T01:09:20.678874500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:09:20.678960 containerd[1890]: time="2026-03-07T01:09:20.678939009Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:09:20.679746 containerd[1890]: time="2026-03-07T01:09:20.679009482Z" level=info msg="containerd successfully booted in 0.211514s" Mar 7 01:09:20.679170 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 7 01:09:20.790805 ntpd[1872]: bind(24) AF_INET6 fe80::4f0:93ff:fe55:b95%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:09:20.791474 ntpd[1872]: 7 Mar 01:09:20 ntpd[1872]: bind(24) AF_INET6 fe80::4f0:93ff:fe55:b95%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:09:20.791474 ntpd[1872]: 7 Mar 01:09:20 ntpd[1872]: unable to create socket on eth0 (6) for fe80::4f0:93ff:fe55:b95%2#123
Mar 7 01:09:20.791474 ntpd[1872]: 7 Mar 01:09:20 ntpd[1872]: failed to init interface for address fe80::4f0:93ff:fe55:b95%2
Mar 7 01:09:20.790854 ntpd[1872]: unable to create socket on eth0 (6) for fe80::4f0:93ff:fe55:b95%2#123
Mar 7 01:09:20.790872 ntpd[1872]: failed to init interface for address fe80::4f0:93ff:fe55:b95%2
Mar 7 01:09:20.893617 sshd_keygen[1916]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:09:20.929377 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:09:20.940228 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:09:20.952633 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:09:20.952894 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:09:20.962467 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:09:20.979218 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:09:20.990563 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:09:20.993907 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:09:20.994959 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:09:21.087042 tar[1888]: linux-amd64/README.md
Mar 7 01:09:21.098420 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:09:21.162292 systemd-networkd[1818]: eth0: Gained IPv6LL
Mar 7 01:09:21.165466 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:09:21.166779 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:09:21.171537 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 7 01:09:21.176258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:09:21.183788 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:09:21.221608 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:09:21.244119 amazon-ssm-agent[2092]: Initializing new seelog logger
Mar 7 01:09:21.244119 amazon-ssm-agent[2092]: New Seelog Logger Creation Complete
Mar 7 01:09:21.244119 amazon-ssm-agent[2092]: 2026/03/07 01:09:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:09:21.244119 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:09:21.244119 amazon-ssm-agent[2092]: 2026/03/07 01:09:21 processing appconfig overrides
Mar 7 01:09:21.244637 amazon-ssm-agent[2092]: 2026/03/07 01:09:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:09:21.244637 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:09:21.244637 amazon-ssm-agent[2092]: 2026/03/07 01:09:21 processing appconfig overrides
Mar 7 01:09:21.245405 amazon-ssm-agent[2092]: 2026/03/07 01:09:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:09:21.245405 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:09:21.245405 amazon-ssm-agent[2092]: 2026/03/07 01:09:21 processing appconfig overrides
Mar 7 01:09:21.245574 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO Proxy environment variables:
Mar 7 01:09:21.247700 amazon-ssm-agent[2092]: 2026/03/07 01:09:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:09:21.247700 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:09:21.247855 amazon-ssm-agent[2092]: 2026/03/07 01:09:21 processing appconfig overrides
Mar 7 01:09:21.346522 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO https_proxy:
Mar 7 01:09:21.446144 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO http_proxy:
Mar 7 01:09:21.530293 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO no_proxy:
Mar 7 01:09:21.530293 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO Checking if agent identity type OnPrem can be assumed
Mar 7 01:09:21.530293 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO Checking if agent identity type EC2 can be assumed
Mar 7 01:09:21.530293 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO Agent will take identity from EC2
Mar 7 01:09:21.530293 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [amazon-ssm-agent] Starting Core Agent
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [Registrar] Starting registrar module
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [EC2Identity] EC2 registration was successful.
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [CredentialRefresher] credentialRefresher has started
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 7 01:09:21.530600 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 7 01:09:21.545109 amazon-ssm-agent[2092]: 2026-03-07 01:09:21 INFO [CredentialRefresher] Next credential rotation will be in 32.2999896985 minutes
Mar 7 01:09:22.549339 amazon-ssm-agent[2092]: 2026-03-07 01:09:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 7 01:09:22.650149 amazon-ssm-agent[2092]: 2026-03-07 01:09:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2112) started
Mar 7 01:09:22.750390 amazon-ssm-agent[2092]: 2026-03-07 01:09:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 7 01:09:23.790795 ntpd[1872]: Listen normally on 7 eth0 [fe80::4f0:93ff:fe55:b95%2]:123
Mar 7 01:09:23.791202 ntpd[1872]: 7 Mar 01:09:23 ntpd[1872]: Listen normally on 7 eth0 [fe80::4f0:93ff:fe55:b95%2]:123
Mar 7 01:09:23.979085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:09:23.980840 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 01:09:23.982463 systemd[1]: Startup finished in 584ms (kernel) + 6.326s (initrd) + 7.999s (userspace) = 14.910s.
Mar 7 01:09:23.993570 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:09:24.274184 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 01:09:24.280441 systemd[1]: Started sshd@0-172.31.29.156:22-68.220.241.50:56362.service - OpenSSH per-connection server daemon (68.220.241.50:56362).
Mar 7 01:09:24.767076 sshd[2138]: Accepted publickey for core from 68.220.241.50 port 56362 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:09:24.769284 sshd[2138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:24.780952 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 01:09:24.786505 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 01:09:24.790744 systemd-logind[1880]: New session 1 of user core.
Mar 7 01:09:24.808122 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 01:09:24.814799 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 01:09:24.829215 (systemd)[2142]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 01:09:24.990286 systemd[2142]: Queued start job for default target default.target.
Mar 7 01:09:24.995393 systemd[2142]: Created slice app.slice - User Application Slice.
Mar 7 01:09:24.995988 systemd[2142]: Reached target paths.target - Paths.
Mar 7 01:09:24.996015 systemd[2142]: Reached target timers.target - Timers.
Mar 7 01:09:24.999202 systemd[2142]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 01:09:25.016421 systemd[2142]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 01:09:25.016587 systemd[2142]: Reached target sockets.target - Sockets.
Mar 7 01:09:25.016611 systemd[2142]: Reached target basic.target - Basic System.
Mar 7 01:09:25.016668 systemd[2142]: Reached target default.target - Main User Target.
Mar 7 01:09:25.016708 systemd[2142]: Startup finished in 178ms.
Mar 7 01:09:25.017462 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 01:09:25.028040 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 01:09:25.183517 kubelet[2128]: E0307 01:09:25.183457 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:09:25.186419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:09:25.186639 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:09:25.186996 systemd[1]: kubelet.service: Consumed 1.020s CPU time.
Mar 7 01:09:25.391541 systemd[1]: Started sshd@1-172.31.29.156:22-68.220.241.50:56366.service - OpenSSH per-connection server daemon (68.220.241.50:56366).
Mar 7 01:09:25.872547 sshd[2155]: Accepted publickey for core from 68.220.241.50 port 56366 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:09:25.874133 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:25.879909 systemd-logind[1880]: New session 2 of user core.
Mar 7 01:09:25.885328 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 01:09:26.223809 sshd[2155]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:26.227500 systemd[1]: sshd@1-172.31.29.156:22-68.220.241.50:56366.service: Deactivated successfully.
Mar 7 01:09:26.229642 systemd[1]: session-2.scope: Deactivated successfully.
Mar 7 01:09:26.231046 systemd-logind[1880]: Session 2 logged out. Waiting for processes to exit.
Mar 7 01:09:26.232494 systemd-logind[1880]: Removed session 2.
Mar 7 01:09:26.315522 systemd[1]: Started sshd@2-172.31.29.156:22-68.220.241.50:56382.service - OpenSSH per-connection server daemon (68.220.241.50:56382).
Mar 7 01:09:28.346380 systemd-resolved[1819]: Clock change detected. Flushing caches.
Mar 7 01:09:28.357754 sshd[2162]: Accepted publickey for core from 68.220.241.50 port 56382 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:09:28.359313 sshd[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:28.364561 systemd-logind[1880]: New session 3 of user core.
Mar 7 01:09:28.376255 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 7 01:09:28.709961 sshd[2162]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:28.721422 systemd-logind[1880]: Session 3 logged out. Waiting for processes to exit.
Mar 7 01:09:28.723514 systemd[1]: sshd@2-172.31.29.156:22-68.220.241.50:56382.service: Deactivated successfully.
Mar 7 01:09:28.727382 systemd[1]: session-3.scope: Deactivated successfully.
Mar 7 01:09:28.733835 systemd-logind[1880]: Removed session 3.
Mar 7 01:09:28.799394 systemd[1]: Started sshd@3-172.31.29.156:22-68.220.241.50:56396.service - OpenSSH per-connection server daemon (68.220.241.50:56396).
Mar 7 01:09:29.278153 sshd[2169]: Accepted publickey for core from 68.220.241.50 port 56396 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:09:29.279677 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:29.284044 systemd-logind[1880]: New session 4 of user core.
Mar 7 01:09:29.294287 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 7 01:09:29.627810 sshd[2169]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:29.631816 systemd-logind[1880]: Session 4 logged out. Waiting for processes to exit.
Mar 7 01:09:29.633164 systemd[1]: sshd@3-172.31.29.156:22-68.220.241.50:56396.service: Deactivated successfully.
Mar 7 01:09:29.635395 systemd[1]: session-4.scope: Deactivated successfully.
Mar 7 01:09:29.636584 systemd-logind[1880]: Removed session 4.
Mar 7 01:09:29.719378 systemd[1]: Started sshd@4-172.31.29.156:22-68.220.241.50:56400.service - OpenSSH per-connection server daemon (68.220.241.50:56400).
Mar 7 01:09:30.207557 sshd[2176]: Accepted publickey for core from 68.220.241.50 port 56400 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:09:30.208286 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:30.212835 systemd-logind[1880]: New session 5 of user core.
Mar 7 01:09:30.219236 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 7 01:09:30.498033 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 01:09:30.498582 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:09:30.514732 sudo[2179]: pam_unix(sudo:session): session closed for user root
Mar 7 01:09:30.593794 sshd[2176]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:30.598961 systemd[1]: sshd@4-172.31.29.156:22-68.220.241.50:56400.service: Deactivated successfully.
Mar 7 01:09:30.600854 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 01:09:30.601650 systemd-logind[1880]: Session 5 logged out. Waiting for processes to exit.
Mar 7 01:09:30.602869 systemd-logind[1880]: Removed session 5.
Mar 7 01:09:30.684344 systemd[1]: Started sshd@5-172.31.29.156:22-68.220.241.50:56412.service - OpenSSH per-connection server daemon (68.220.241.50:56412).
Mar 7 01:09:31.171566 sshd[2184]: Accepted publickey for core from 68.220.241.50 port 56412 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:09:31.173167 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:31.178661 systemd-logind[1880]: New session 6 of user core.
Mar 7 01:09:31.185257 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 01:09:31.450091 sudo[2188]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 01:09:31.450609 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:09:31.454572 sudo[2188]: pam_unix(sudo:session): session closed for user root
Mar 7 01:09:31.459927 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 7 01:09:31.460329 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:09:31.481409 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 7 01:09:31.483675 auditctl[2191]: No rules
Mar 7 01:09:31.484094 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 01:09:31.484310 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 7 01:09:31.487405 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:09:31.516603 augenrules[2209]: No rules
Mar 7 01:09:31.518018 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:09:31.519203 sudo[2187]: pam_unix(sudo:session): session closed for user root
Mar 7 01:09:31.597735 sshd[2184]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:31.601083 systemd[1]: sshd@5-172.31.29.156:22-68.220.241.50:56412.service: Deactivated successfully.
Mar 7 01:09:31.603387 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 01:09:31.604917 systemd-logind[1880]: Session 6 logged out. Waiting for processes to exit.
Mar 7 01:09:31.606029 systemd-logind[1880]: Removed session 6.
Mar 7 01:09:31.686280 systemd[1]: Started sshd@6-172.31.29.156:22-68.220.241.50:56418.service - OpenSSH per-connection server daemon (68.220.241.50:56418).
Mar 7 01:09:32.170881 sshd[2217]: Accepted publickey for core from 68.220.241.50 port 56418 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:09:32.172369 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:32.177424 systemd-logind[1880]: New session 7 of user core.
Mar 7 01:09:32.187229 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 01:09:32.444611 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 01:09:32.445033 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:09:32.812363 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 01:09:32.814373 (dockerd)[2235]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 01:09:33.177903 dockerd[2235]: time="2026-03-07T01:09:33.177768912Z" level=info msg="Starting up"
Mar 7 01:09:33.327807 dockerd[2235]: time="2026-03-07T01:09:33.327755234Z" level=info msg="Loading containers: start."
Mar 7 01:09:33.446037 kernel: Initializing XFRM netlink socket
Mar 7 01:09:33.476713 (udev-worker)[2258]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:09:33.536963 systemd-networkd[1818]: docker0: Link UP
Mar 7 01:09:33.556609 dockerd[2235]: time="2026-03-07T01:09:33.556560626Z" level=info msg="Loading containers: done."
Mar 7 01:09:33.581229 dockerd[2235]: time="2026-03-07T01:09:33.581169004Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 01:09:33.581436 dockerd[2235]: time="2026-03-07T01:09:33.581299936Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 01:09:33.581495 dockerd[2235]: time="2026-03-07T01:09:33.581446212Z" level=info msg="Daemon has completed initialization"
Mar 7 01:09:33.628575 dockerd[2235]: time="2026-03-07T01:09:33.628192329Z" level=info msg="API listen on /run/docker.sock"
Mar 7 01:09:33.628469 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 01:09:35.436069 containerd[1890]: time="2026-03-07T01:09:35.436028832Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 7 01:09:36.032527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986661823.mount: Deactivated successfully.
Mar 7 01:09:36.991888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:09:36.998342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:09:37.214168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:09:37.221523 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:09:37.266385 kubelet[2436]: E0307 01:09:37.266148 2436 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:09:37.270689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:09:37.270895 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:09:38.161105 containerd[1890]: time="2026-03-07T01:09:38.161048883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:38.165026 containerd[1890]: time="2026-03-07T01:09:38.164228019Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:38.165026 containerd[1890]: time="2026-03-07T01:09:38.164298741Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 7 01:09:38.170759 containerd[1890]: time="2026-03-07T01:09:38.170710601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:38.171919 containerd[1890]: time="2026-03-07T01:09:38.171877346Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.735803608s"
Mar 7 01:09:38.172097 containerd[1890]: time="2026-03-07T01:09:38.172075611Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 7 01:09:38.173151 containerd[1890]: time="2026-03-07T01:09:38.173123058Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 7 01:09:39.827512 containerd[1890]: time="2026-03-07T01:09:39.827458531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:39.829967 containerd[1890]: time="2026-03-07T01:09:39.829889083Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 7 01:09:39.836166 containerd[1890]: time="2026-03-07T01:09:39.836082299Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:39.843035 containerd[1890]: time="2026-03-07T01:09:39.842969989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:39.844707 containerd[1890]: time="2026-03-07T01:09:39.844648289Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.671485018s"
Mar 7 01:09:39.844884 containerd[1890]: time="2026-03-07T01:09:39.844861867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 7 01:09:39.845448 containerd[1890]: time="2026-03-07T01:09:39.845424128Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 7 01:09:41.228882 containerd[1890]: time="2026-03-07T01:09:41.228832270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:41.230146 containerd[1890]: time="2026-03-07T01:09:41.230097325Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 7 01:09:41.231029 containerd[1890]: time="2026-03-07T01:09:41.230900252Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:41.234406 containerd[1890]: time="2026-03-07T01:09:41.234362479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:41.235384 containerd[1890]: time="2026-03-07T01:09:41.235201452Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.388894297s"
Mar 7 01:09:41.235384 containerd[1890]: time="2026-03-07T01:09:41.235241510Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 7 01:09:41.235858 containerd[1890]: time="2026-03-07T01:09:41.235827234Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 7 01:09:42.242934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167186346.mount: Deactivated successfully.
Mar 7 01:09:42.674549 containerd[1890]: time="2026-03-07T01:09:42.674060450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:42.675533 containerd[1890]: time="2026-03-07T01:09:42.675383125Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 7 01:09:42.676638 containerd[1890]: time="2026-03-07T01:09:42.676541674Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:42.679477 containerd[1890]: time="2026-03-07T01:09:42.678708822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:42.679477 containerd[1890]: time="2026-03-07T01:09:42.679330505Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.443468259s"
Mar 7 01:09:42.679477 containerd[1890]: time="2026-03-07T01:09:42.679366883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 7 01:09:42.679852
containerd[1890]: time="2026-03-07T01:09:42.679829046Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 7 01:09:43.146455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438232355.mount: Deactivated successfully. Mar 7 01:09:44.191713 containerd[1890]: time="2026-03-07T01:09:44.191660339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:09:44.193116 containerd[1890]: time="2026-03-07T01:09:44.193069511Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 7 01:09:44.193867 containerd[1890]: time="2026-03-07T01:09:44.193809158Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:09:44.197612 containerd[1890]: time="2026-03-07T01:09:44.197155941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:09:44.198558 containerd[1890]: time="2026-03-07T01:09:44.198515825Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.518566402s" Mar 7 01:09:44.198654 containerd[1890]: time="2026-03-07T01:09:44.198560138Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 7 01:09:44.199687 containerd[1890]: time="2026-03-07T01:09:44.199657416Z" level=info 
msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:09:44.653325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806588926.mount: Deactivated successfully. Mar 7 01:09:44.658390 containerd[1890]: time="2026-03-07T01:09:44.658345478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:09:44.659291 containerd[1890]: time="2026-03-07T01:09:44.659242584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 01:09:44.660326 containerd[1890]: time="2026-03-07T01:09:44.660075860Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:09:44.662470 containerd[1890]: time="2026-03-07T01:09:44.662414170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:09:44.663787 containerd[1890]: time="2026-03-07T01:09:44.663179702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 463.485687ms" Mar 7 01:09:44.663787 containerd[1890]: time="2026-03-07T01:09:44.663216672Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 01:09:44.664029 containerd[1890]: time="2026-03-07T01:09:44.663994882Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 7 01:09:45.122676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844515144.mount: Deactivated 
successfully. Mar 7 01:09:46.344413 containerd[1890]: time="2026-03-07T01:09:46.344353528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:09:46.346524 containerd[1890]: time="2026-03-07T01:09:46.346287996Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 7 01:09:46.348925 containerd[1890]: time="2026-03-07T01:09:46.348660679Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:09:46.353223 containerd[1890]: time="2026-03-07T01:09:46.352868416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:09:46.354450 containerd[1890]: time="2026-03-07T01:09:46.354293537Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.690165787s" Mar 7 01:09:46.354450 containerd[1890]: time="2026-03-07T01:09:46.354337360Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 7 01:09:47.521446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:09:47.530135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:09:47.820279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:09:47.822315 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:09:47.880555 kubelet[2609]: E0307 01:09:47.880508 2609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:09:47.883661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:09:47.883882 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:09:50.494968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:09:50.502354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:09:50.539361 systemd[1]: Reloading requested from client PID 2623 ('systemctl') (unit session-7.scope)...
Mar 7 01:09:50.539382 systemd[1]: Reloading...
Mar 7 01:09:50.666040 zram_generator::config[2660]: No configuration found.
Mar 7 01:09:50.819311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:09:50.905663 systemd[1]: Reloading finished in 365 ms.
Mar 7 01:09:50.959339 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:09:50.959456 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:09:50.959768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:09:50.966401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:09:51.160370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:09:51.173518 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:09:51.224888 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:09:51.224888 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:09:51.225328 kubelet[2727]: I0307 01:09:51.224950 2727 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:09:51.433416 kubelet[2727]: I0307 01:09:51.428256 2727 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 7 01:09:51.433416 kubelet[2727]: I0307 01:09:51.428288 2727 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:09:51.433416 kubelet[2727]: I0307 01:09:51.428318 2727 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:09:51.433416 kubelet[2727]: I0307 01:09:51.428331 2727 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:09:51.433416 kubelet[2727]: I0307 01:09:51.428880 2727 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:09:51.440963 kubelet[2727]: I0307 01:09:51.440931 2727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:09:51.444668 kubelet[2727]: E0307 01:09:51.444638 2727 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:09:51.444801 kubelet[2727]: I0307 01:09:51.444709 2727 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:09:51.447662 kubelet[2727]: I0307 01:09:51.447635 2727 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:09:51.448614 kubelet[2727]: E0307 01:09:51.448584 2727 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.156:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.156:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:09:51.448971 kubelet[2727]: I0307 01:09:51.448934 2727 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:09:51.449175 kubelet[2727]: I0307 01:09:51.448976 2727 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-156","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:09:51.449307 kubelet[2727]: I0307 01:09:51.449188 2727 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:09:51.449307 kubelet[2727]: I0307 01:09:51.449211 2727 container_manager_linux.go:306] "Creating device plugin manager"
Mar 7 01:09:51.449396 kubelet[2727]: I0307 01:09:51.449323 2727 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:09:51.465980 kubelet[2727]: I0307 01:09:51.465930 2727 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:09:51.466703 kubelet[2727]: I0307 01:09:51.466362 2727 kubelet.go:475] "Attempting to sync node with API server"
Mar 7 01:09:51.466703 kubelet[2727]: I0307 01:09:51.466394 2727 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:09:51.466703 kubelet[2727]: I0307 01:09:51.466422 2727 kubelet.go:387] "Adding apiserver pod source"
Mar 7 01:09:51.466703 kubelet[2727]: I0307 01:09:51.466436 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:09:51.468077 kubelet[2727]: E0307 01:09:51.467929 2727 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-156&limit=500&resourceVersion=0\": dial tcp 172.31.29.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:09:51.470571 kubelet[2727]: E0307 01:09:51.470548 2727 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:09:51.471247 kubelet[2727]: I0307 01:09:51.471230 2727 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:09:51.472321 kubelet[2727]: I0307 01:09:51.472276 2727 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:09:51.472897 kubelet[2727]: I0307 01:09:51.472393 2727 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:09:51.472897 kubelet[2727]: W0307 01:09:51.472443 2727 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:09:51.477226 kubelet[2727]: I0307 01:09:51.477153 2727 server.go:1262] "Started kubelet"
Mar 7 01:09:51.478369 kubelet[2727]: I0307 01:09:51.478338 2727 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:09:51.479723 kubelet[2727]: I0307 01:09:51.479707 2727 server.go:310] "Adding debug handlers to kubelet server"
Mar 7 01:09:51.482230 kubelet[2727]: I0307 01:09:51.482076 2727 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:09:51.482977 kubelet[2727]: I0307 01:09:51.482377 2727 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:09:51.482977 kubelet[2727]: I0307 01:09:51.482743 2727 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:09:51.485882 kubelet[2727]: E0307 01:09:51.482902 2727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.156:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.156:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-156.189a69daba8a25ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-156,UID:ip-172-31-29-156,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-156,},FirstTimestamp:2026-03-07 01:09:51.477114298 +0000 UTC m=+0.295861612,LastTimestamp:2026-03-07 01:09:51.477114298 +0000 UTC m=+0.295861612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-156,}"
Mar 7 01:09:51.488644 kubelet[2727]: I0307 01:09:51.488625 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:09:51.489639 kubelet[2727]: I0307 01:09:51.489590 2727 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:09:51.496514 kubelet[2727]: E0307 01:09:51.496449 2727 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:09:51.497023 kubelet[2727]: E0307 01:09:51.496981 2727 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-156\" not found"
Mar 7 01:09:51.497219 kubelet[2727]: I0307 01:09:51.497208 2727 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 7 01:09:51.498444 kubelet[2727]: I0307 01:09:51.498411 2727 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:09:51.504079 kubelet[2727]: I0307 01:09:51.498541 2727 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:09:51.504079 kubelet[2727]: E0307 01:09:51.499139 2727 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:09:51.504079 kubelet[2727]: E0307 01:09:51.499785 2727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-156?timeout=10s\": dial tcp 172.31.29.156:6443: connect: connection refused" interval="200ms"
Mar 7 01:09:51.504079 kubelet[2727]: I0307 01:09:51.500105 2727 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:09:51.504079 kubelet[2727]: I0307 01:09:51.501222 2727 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:09:51.504079 kubelet[2727]: I0307 01:09:51.502968 2727 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:09:51.524206 kubelet[2727]: I0307 01:09:51.524161 2727 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:09:51.525736 kubelet[2727]: I0307 01:09:51.525704 2727 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:09:51.525736 kubelet[2727]: I0307 01:09:51.525735 2727 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 7 01:09:51.525878 kubelet[2727]: I0307 01:09:51.525769 2727 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 7 01:09:51.525878 kubelet[2727]: E0307 01:09:51.525822 2727 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:09:51.533592 kubelet[2727]: E0307 01:09:51.533554 2727 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:09:51.541064 kubelet[2727]: I0307 01:09:51.541036 2727 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:09:51.541064 kubelet[2727]: I0307 01:09:51.541058 2727 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:09:51.541231 kubelet[2727]: I0307 01:09:51.541078 2727 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:09:51.545608 kubelet[2727]: I0307 01:09:51.545580 2727 policy_none.go:49] "None policy: Start"
Mar 7 01:09:51.545608 kubelet[2727]: I0307 01:09:51.545603 2727 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:09:51.545828 kubelet[2727]: I0307 01:09:51.545617 2727 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:09:51.549437 kubelet[2727]: I0307 01:09:51.549403 2727 policy_none.go:47] "Start"
Mar 7 01:09:51.554345 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:09:51.564657 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:09:51.568176 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:09:51.577513 kubelet[2727]: E0307 01:09:51.576799 2727 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:09:51.577513 kubelet[2727]: I0307 01:09:51.577053 2727 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:09:51.577513 kubelet[2727]: I0307 01:09:51.577068 2727 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:09:51.577513 kubelet[2727]: I0307 01:09:51.577328 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:09:51.578794 kubelet[2727]: E0307 01:09:51.578620 2727 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:09:51.578794 kubelet[2727]: E0307 01:09:51.578664 2727 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-156\" not found"
Mar 7 01:09:51.639496 systemd[1]: Created slice kubepods-burstable-pod622872b6f0395c57dfefd3510adba692.slice - libcontainer container kubepods-burstable-pod622872b6f0395c57dfefd3510adba692.slice.
Mar 7 01:09:51.659158 kubelet[2727]: E0307 01:09:51.658864 2727 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-156\" not found" node="ip-172-31-29-156"
Mar 7 01:09:51.662414 systemd[1]: Created slice kubepods-burstable-pod0d5457358c539a54dc80ba3206b3b637.slice - libcontainer container kubepods-burstable-pod0d5457358c539a54dc80ba3206b3b637.slice.
Mar 7 01:09:51.664887 kubelet[2727]: E0307 01:09:51.664860 2727 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-156\" not found" node="ip-172-31-29-156"
Mar 7 01:09:51.667727 systemd[1]: Created slice kubepods-burstable-pod732da558d8b253226c7633d21d8b9cd4.slice - libcontainer container kubepods-burstable-pod732da558d8b253226c7633d21d8b9cd4.slice.
Mar 7 01:09:51.669756 kubelet[2727]: E0307 01:09:51.669729 2727 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-156\" not found" node="ip-172-31-29-156"
Mar 7 01:09:51.679201 kubelet[2727]: I0307 01:09:51.679163 2727 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-156"
Mar 7 01:09:51.679529 kubelet[2727]: E0307 01:09:51.679494 2727 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.156:6443/api/v1/nodes\": dial tcp 172.31.29.156:6443: connect: connection refused" node="ip-172-31-29-156"
Mar 7 01:09:51.700402 kubelet[2727]: E0307 01:09:51.700265 2727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-156?timeout=10s\": dial tcp 172.31.29.156:6443: connect: connection refused" interval="400ms"
Mar 7 01:09:51.799852 kubelet[2727]: I0307 01:09:51.799801 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156"
Mar 7 01:09:51.799852 kubelet[2727]: I0307 01:09:51.799858 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156"
Mar 7 01:09:51.800155 kubelet[2727]: I0307 01:09:51.799894 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156"
Mar 7 01:09:51.800155 kubelet[2727]: I0307 01:09:51.799949 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/622872b6f0395c57dfefd3510adba692-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-156\" (UID: \"622872b6f0395c57dfefd3510adba692\") " pod="kube-system/kube-apiserver-ip-172-31-29-156"
Mar 7 01:09:51.800155 kubelet[2727]: I0307 01:09:51.799971 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/622872b6f0395c57dfefd3510adba692-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-156\" (UID: \"622872b6f0395c57dfefd3510adba692\") " pod="kube-system/kube-apiserver-ip-172-31-29-156"
Mar 7 01:09:51.800155 kubelet[2727]: I0307 01:09:51.800031 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156"
Mar 7 01:09:51.800155 kubelet[2727]: I0307 01:09:51.800053 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156"
Mar 7 01:09:51.800353 kubelet[2727]: I0307 01:09:51.800076 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/732da558d8b253226c7633d21d8b9cd4-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-156\" (UID: \"732da558d8b253226c7633d21d8b9cd4\") " pod="kube-system/kube-scheduler-ip-172-31-29-156"
Mar 7 01:09:51.800353 kubelet[2727]: I0307 01:09:51.800124 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/622872b6f0395c57dfefd3510adba692-ca-certs\") pod \"kube-apiserver-ip-172-31-29-156\" (UID: \"622872b6f0395c57dfefd3510adba692\") " pod="kube-system/kube-apiserver-ip-172-31-29-156"
Mar 7 01:09:51.881340 kubelet[2727]: I0307 01:09:51.881310 2727 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-156"
Mar 7 01:09:51.881727 kubelet[2727]: E0307 01:09:51.881693 2727 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.156:6443/api/v1/nodes\": dial tcp 172.31.29.156:6443: connect: connection refused" node="ip-172-31-29-156"
Mar 7 01:09:51.964206 containerd[1890]: time="2026-03-07T01:09:51.964087197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-156,Uid:622872b6f0395c57dfefd3510adba692,Namespace:kube-system,Attempt:0,}"
Mar 7 01:09:51.975250 containerd[1890]: time="2026-03-07T01:09:51.974963836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-156,Uid:732da558d8b253226c7633d21d8b9cd4,Namespace:kube-system,Attempt:0,}"
Mar 7 01:09:51.975250 containerd[1890]: time="2026-03-07T01:09:51.974963859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-156,Uid:0d5457358c539a54dc80ba3206b3b637,Namespace:kube-system,Attempt:0,}"
Mar 7 01:09:52.101738 kubelet[2727]: E0307 01:09:52.101693 2727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-156?timeout=10s\": dial tcp 172.31.29.156:6443: connect: connection refused" interval="800ms"
Mar 7 01:09:52.137946 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 7 01:09:52.284105 kubelet[2727]: I0307 01:09:52.283934 2727 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-156" Mar 7 01:09:52.284696 kubelet[2727]: E0307 01:09:52.284325 2727 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.156:6443/api/v1/nodes\": dial tcp 172.31.29.156:6443: connect: connection refused" node="ip-172-31-29-156" Mar 7 01:09:52.318494 kubelet[2727]: E0307 01:09:52.318458 2727 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:09:52.337944 kubelet[2727]: E0307 01:09:52.337896 2727 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-156&limit=500&resourceVersion=0\": dial tcp 172.31.29.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:09:52.357283 kubelet[2727]: E0307 01:09:52.357239 2727 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:09:52.368866 kubelet[2727]: E0307 01:09:52.368823 2727 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.RuntimeClass" Mar 7 01:09:52.502983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936113089.mount: Deactivated successfully. Mar 7 01:09:52.519441 containerd[1890]: time="2026-03-07T01:09:52.519383175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:52.521159 containerd[1890]: time="2026-03-07T01:09:52.521096365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:09:52.523281 containerd[1890]: time="2026-03-07T01:09:52.523243465Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:52.525275 containerd[1890]: time="2026-03-07T01:09:52.525236080Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:52.527190 containerd[1890]: time="2026-03-07T01:09:52.527129603Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:09:52.529578 containerd[1890]: time="2026-03-07T01:09:52.529532608Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:52.531337 containerd[1890]: time="2026-03-07T01:09:52.531089520Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:09:52.534749 containerd[1890]: time="2026-03-07T01:09:52.534640064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:52.537042 containerd[1890]: time="2026-03-07T01:09:52.536254854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.068609ms" Mar 7 01:09:52.538222 containerd[1890]: time="2026-03-07T01:09:52.538151474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.976951ms" Mar 7 01:09:52.542930 containerd[1890]: time="2026-03-07T01:09:52.542883510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.716416ms" Mar 7 01:09:52.745621 containerd[1890]: time="2026-03-07T01:09:52.745513506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:09:52.745621 containerd[1890]: time="2026-03-07T01:09:52.745592410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:09:52.749063 containerd[1890]: time="2026-03-07T01:09:52.748572733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:52.751121 containerd[1890]: time="2026-03-07T01:09:52.748722354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:52.751121 containerd[1890]: time="2026-03-07T01:09:52.748958818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:09:52.751121 containerd[1890]: time="2026-03-07T01:09:52.749229677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:09:52.751121 containerd[1890]: time="2026-03-07T01:09:52.749290513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:52.751121 containerd[1890]: time="2026-03-07T01:09:52.749466567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:52.759547 containerd[1890]: time="2026-03-07T01:09:52.758648298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:09:52.759547 containerd[1890]: time="2026-03-07T01:09:52.758718316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:09:52.759547 containerd[1890]: time="2026-03-07T01:09:52.758742384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:52.759547 containerd[1890]: time="2026-03-07T01:09:52.758847543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:52.786307 systemd[1]: Started cri-containerd-00c678f6c3e027b768d62384fc5246d8cd40b5879ce1fb9c191b57a866b35d93.scope - libcontainer container 00c678f6c3e027b768d62384fc5246d8cd40b5879ce1fb9c191b57a866b35d93. Mar 7 01:09:52.800373 systemd[1]: Started cri-containerd-144703bab25d537d7469841d731817f086a52d45af6b6a0aff734182f19d09d7.scope - libcontainer container 144703bab25d537d7469841d731817f086a52d45af6b6a0aff734182f19d09d7. Mar 7 01:09:52.817559 systemd[1]: Started cri-containerd-5c43528995d89973cd80ab1e761aedd97d63fc3a2c691dcb931f81ba24c6f692.scope - libcontainer container 5c43528995d89973cd80ab1e761aedd97d63fc3a2c691dcb931f81ba24c6f692. Mar 7 01:09:52.903446 kubelet[2727]: E0307 01:09:52.903385 2727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-156?timeout=10s\": dial tcp 172.31.29.156:6443: connect: connection refused" interval="1.6s" Mar 7 01:09:52.911641 containerd[1890]: time="2026-03-07T01:09:52.911175863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-156,Uid:732da558d8b253226c7633d21d8b9cd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"144703bab25d537d7469841d731817f086a52d45af6b6a0aff734182f19d09d7\"" Mar 7 01:09:52.913360 containerd[1890]: time="2026-03-07T01:09:52.913260369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-156,Uid:0d5457358c539a54dc80ba3206b3b637,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c43528995d89973cd80ab1e761aedd97d63fc3a2c691dcb931f81ba24c6f692\"" Mar 7 01:09:52.916036 containerd[1890]: time="2026-03-07T01:09:52.915845654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-156,Uid:622872b6f0395c57dfefd3510adba692,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"00c678f6c3e027b768d62384fc5246d8cd40b5879ce1fb9c191b57a866b35d93\"" Mar 7 01:09:52.927512 containerd[1890]: time="2026-03-07T01:09:52.927178543Z" level=info msg="CreateContainer within sandbox \"5c43528995d89973cd80ab1e761aedd97d63fc3a2c691dcb931f81ba24c6f692\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:09:52.932336 containerd[1890]: time="2026-03-07T01:09:52.931932269Z" level=info msg="CreateContainer within sandbox \"00c678f6c3e027b768d62384fc5246d8cd40b5879ce1fb9c191b57a866b35d93\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:09:52.935384 containerd[1890]: time="2026-03-07T01:09:52.935347795Z" level=info msg="CreateContainer within sandbox \"144703bab25d537d7469841d731817f086a52d45af6b6a0aff734182f19d09d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:09:52.979962 containerd[1890]: time="2026-03-07T01:09:52.979909449Z" level=info msg="CreateContainer within sandbox \"5c43528995d89973cd80ab1e761aedd97d63fc3a2c691dcb931f81ba24c6f692\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615\"" Mar 7 01:09:52.980747 containerd[1890]: time="2026-03-07T01:09:52.980714433Z" level=info msg="StartContainer for \"ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615\"" Mar 7 01:09:52.982427 containerd[1890]: time="2026-03-07T01:09:52.982380491Z" level=info msg="CreateContainer within sandbox \"00c678f6c3e027b768d62384fc5246d8cd40b5879ce1fb9c191b57a866b35d93\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7bbac371659117ff94ac1c71ad9bdf3faa8cad4fec9c5a65db30c7dbb6fb07de\"" Mar 7 01:09:52.988359 containerd[1890]: time="2026-03-07T01:09:52.988046099Z" level=info msg="CreateContainer within sandbox \"144703bab25d537d7469841d731817f086a52d45af6b6a0aff734182f19d09d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container 
id \"dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0\"" Mar 7 01:09:52.990302 containerd[1890]: time="2026-03-07T01:09:52.990254180Z" level=info msg="StartContainer for \"7bbac371659117ff94ac1c71ad9bdf3faa8cad4fec9c5a65db30c7dbb6fb07de\"" Mar 7 01:09:52.998055 containerd[1890]: time="2026-03-07T01:09:52.997087307Z" level=info msg="StartContainer for \"dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0\"" Mar 7 01:09:53.027241 systemd[1]: Started cri-containerd-ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615.scope - libcontainer container ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615. Mar 7 01:09:53.044195 systemd[1]: Started cri-containerd-7bbac371659117ff94ac1c71ad9bdf3faa8cad4fec9c5a65db30c7dbb6fb07de.scope - libcontainer container 7bbac371659117ff94ac1c71ad9bdf3faa8cad4fec9c5a65db30c7dbb6fb07de. Mar 7 01:09:53.066242 systemd[1]: Started cri-containerd-dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0.scope - libcontainer container dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0. 
Mar 7 01:09:53.088022 kubelet[2727]: I0307 01:09:53.087393 2727 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-156" Mar 7 01:09:53.089568 kubelet[2727]: E0307 01:09:53.089538 2727 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.156:6443/api/v1/nodes\": dial tcp 172.31.29.156:6443: connect: connection refused" node="ip-172-31-29-156" Mar 7 01:09:53.133031 containerd[1890]: time="2026-03-07T01:09:53.131236628Z" level=info msg="StartContainer for \"ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615\" returns successfully" Mar 7 01:09:53.148823 containerd[1890]: time="2026-03-07T01:09:53.148775335Z" level=info msg="StartContainer for \"7bbac371659117ff94ac1c71ad9bdf3faa8cad4fec9c5a65db30c7dbb6fb07de\" returns successfully" Mar 7 01:09:53.195015 containerd[1890]: time="2026-03-07T01:09:53.194955487Z" level=info msg="StartContainer for \"dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0\" returns successfully" Mar 7 01:09:53.552680 kubelet[2727]: E0307 01:09:53.552644 2727 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-156\" not found" node="ip-172-31-29-156" Mar 7 01:09:53.556681 kubelet[2727]: E0307 01:09:53.556647 2727 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-156\" not found" node="ip-172-31-29-156" Mar 7 01:09:53.562548 kubelet[2727]: E0307 01:09:53.562513 2727 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-156\" not found" node="ip-172-31-29-156" Mar 7 01:09:54.564528 kubelet[2727]: E0307 01:09:54.564491 2727 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-156\" not found" node="ip-172-31-29-156" Mar 7 01:09:54.565269 kubelet[2727]: E0307 01:09:54.565245 
2727 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-156\" not found" node="ip-172-31-29-156" Mar 7 01:09:54.692424 kubelet[2727]: I0307 01:09:54.692395 2727 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-156" Mar 7 01:09:55.722078 kubelet[2727]: E0307 01:09:55.722032 2727 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-156\" not found" node="ip-172-31-29-156" Mar 7 01:09:55.899909 kubelet[2727]: I0307 01:09:55.899868 2727 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-156" Mar 7 01:09:55.903430 kubelet[2727]: I0307 01:09:55.901423 2727 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-156" Mar 7 01:09:55.921775 kubelet[2727]: E0307 01:09:55.921736 2727 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-156\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-156" Mar 7 01:09:55.921775 kubelet[2727]: I0307 01:09:55.921775 2727 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-156" Mar 7 01:09:55.925657 kubelet[2727]: E0307 01:09:55.924666 2727 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-156\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-156" Mar 7 01:09:55.925657 kubelet[2727]: I0307 01:09:55.924696 2727 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-156" Mar 7 01:09:55.929078 kubelet[2727]: E0307 01:09:55.929047 2727 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-156\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-ip-172-31-29-156" Mar 7 01:09:56.474483 kubelet[2727]: I0307 01:09:56.474442 2727 apiserver.go:52] "Watching apiserver" Mar 7 01:09:56.499023 kubelet[2727]: I0307 01:09:56.498976 2727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:09:57.841171 kubelet[2727]: I0307 01:09:57.841135 2727 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-156" Mar 7 01:09:58.149909 systemd[1]: Reloading requested from client PID 3014 ('systemctl') (unit session-7.scope)... Mar 7 01:09:58.149929 systemd[1]: Reloading... Mar 7 01:09:58.272060 zram_generator::config[3054]: No configuration found. Mar 7 01:09:58.394056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:09:58.495789 systemd[1]: Reloading finished in 345 ms. Mar 7 01:09:58.544891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:09:58.560025 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:09:58.560256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:09:58.567369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:09:58.794401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:09:58.801793 (kubelet)[3114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:09:58.874052 kubelet[3114]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:09:58.874052 kubelet[3114]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:09:58.874052 kubelet[3114]: I0307 01:09:58.873607 3114 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:09:58.882208 kubelet[3114]: I0307 01:09:58.882170 3114 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:09:58.882208 kubelet[3114]: I0307 01:09:58.882196 3114 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:09:58.882402 kubelet[3114]: I0307 01:09:58.882224 3114 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:09:58.882402 kubelet[3114]: I0307 01:09:58.882236 3114 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:09:58.882978 kubelet[3114]: I0307 01:09:58.882947 3114 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:09:58.887025 kubelet[3114]: I0307 01:09:58.886057 3114 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:09:58.893323 kubelet[3114]: I0307 01:09:58.893286 3114 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:09:58.896640 kubelet[3114]: E0307 01:09:58.896597 3114 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:09:58.896769 kubelet[3114]: I0307 01:09:58.896662 3114 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:09:58.899492 kubelet[3114]: I0307 01:09:58.899453 3114 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:09:58.900456 kubelet[3114]: I0307 01:09:58.900411 3114 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:09:58.900658 kubelet[3114]: I0307 01:09:58.900457 3114 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-156","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:09:58.900658 kubelet[3114]: I0307 01:09:58.900645 3114 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 
01:09:58.900658 kubelet[3114]: I0307 01:09:58.900661 3114 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:09:58.900887 kubelet[3114]: I0307 01:09:58.900705 3114 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:09:58.901021 kubelet[3114]: I0307 01:09:58.900979 3114 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:09:58.901208 kubelet[3114]: I0307 01:09:58.901191 3114 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:09:58.901279 kubelet[3114]: I0307 01:09:58.901210 3114 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:09:58.901279 kubelet[3114]: I0307 01:09:58.901237 3114 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:09:58.901279 kubelet[3114]: I0307 01:09:58.901257 3114 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:09:58.914467 kubelet[3114]: I0307 01:09:58.913138 3114 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:09:58.914467 kubelet[3114]: I0307 01:09:58.913874 3114 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:09:58.914467 kubelet[3114]: I0307 01:09:58.913926 3114 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:09:58.927020 kubelet[3114]: I0307 01:09:58.926983 3114 server.go:1262] "Started kubelet" Mar 7 01:09:58.932491 kubelet[3114]: I0307 01:09:58.932112 3114 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:09:58.932491 kubelet[3114]: I0307 01:09:58.932240 3114 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:09:58.945649 kubelet[3114]: I0307 01:09:58.945596 3114 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Mar 7 01:09:58.945827 kubelet[3114]: I0307 01:09:58.945816 3114 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:09:58.946554 kubelet[3114]: I0307 01:09:58.946531 3114 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:09:58.950826 kubelet[3114]: I0307 01:09:58.950775 3114 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:09:58.955440 kubelet[3114]: I0307 01:09:58.955302 3114 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:09:58.959940 kubelet[3114]: I0307 01:09:58.959909 3114 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:09:58.960392 kubelet[3114]: E0307 01:09:58.960160 3114 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-156\" not found" Mar 7 01:09:58.964847 kubelet[3114]: I0307 01:09:58.964820 3114 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:09:58.964984 kubelet[3114]: I0307 01:09:58.964971 3114 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:09:58.975140 kubelet[3114]: I0307 01:09:58.975115 3114 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:09:58.975309 kubelet[3114]: I0307 01:09:58.975297 3114 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:09:58.977429 kubelet[3114]: I0307 01:09:58.977359 3114 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:09:58.984362 kubelet[3114]: I0307 01:09:58.984319 3114 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 01:09:58.985701 kubelet[3114]: I0307 01:09:58.985667 3114 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:09:58.985701 kubelet[3114]: I0307 01:09:58.985690 3114 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:09:58.985850 kubelet[3114]: I0307 01:09:58.985715 3114 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:09:58.985850 kubelet[3114]: E0307 01:09:58.985769 3114 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:09:59.032815 kubelet[3114]: I0307 01:09:59.032784 3114 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:09:59.032815 kubelet[3114]: I0307 01:09:59.032803 3114 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:09:59.032815 kubelet[3114]: I0307 01:09:59.032824 3114 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:09:59.033082 kubelet[3114]: I0307 01:09:59.032978 3114 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:09:59.033082 kubelet[3114]: I0307 01:09:59.032990 3114 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:09:59.033082 kubelet[3114]: I0307 01:09:59.033033 3114 policy_none.go:49] "None policy: Start" Mar 7 01:09:59.033082 kubelet[3114]: I0307 01:09:59.033046 3114 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:09:59.033082 kubelet[3114]: I0307 01:09:59.033058 3114 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:09:59.033370 kubelet[3114]: I0307 01:09:59.033183 3114 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:09:59.033370 kubelet[3114]: I0307 01:09:59.033193 3114 policy_none.go:47] "Start" Mar 7 01:09:59.039750 kubelet[3114]: E0307 01:09:59.039277 3114 manager.go:513] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:09:59.039750 kubelet[3114]: I0307 01:09:59.039507 3114 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:09:59.039750 kubelet[3114]: I0307 01:09:59.039520 3114 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:09:59.041118 kubelet[3114]: I0307 01:09:59.041102 3114 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:09:59.045815 kubelet[3114]: E0307 01:09:59.045729 3114 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:09:59.086636 kubelet[3114]: I0307 01:09:59.086595 3114 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-156" Mar 7 01:09:59.086961 kubelet[3114]: I0307 01:09:59.086930 3114 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-156" Mar 7 01:09:59.090572 kubelet[3114]: I0307 01:09:59.090372 3114 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-156" Mar 7 01:09:59.101663 kubelet[3114]: E0307 01:09:59.101622 3114 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-156\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-156" Mar 7 01:09:59.143584 kubelet[3114]: I0307 01:09:59.143401 3114 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-156" Mar 7 01:09:59.153408 kubelet[3114]: I0307 01:09:59.153365 3114 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-156" Mar 7 01:09:59.153550 kubelet[3114]: I0307 01:09:59.153451 3114 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-156" Mar 7 01:09:59.170426 kubelet[3114]: I0307 01:09:59.169429 3114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/622872b6f0395c57dfefd3510adba692-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-156\" (UID: \"622872b6f0395c57dfefd3510adba692\") " pod="kube-system/kube-apiserver-ip-172-31-29-156" Mar 7 01:09:59.170426 kubelet[3114]: I0307 01:09:59.169481 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/622872b6f0395c57dfefd3510adba692-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-156\" (UID: \"622872b6f0395c57dfefd3510adba692\") " pod="kube-system/kube-apiserver-ip-172-31-29-156" Mar 7 01:09:59.170426 kubelet[3114]: I0307 01:09:59.169513 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156" Mar 7 01:09:59.170426 kubelet[3114]: I0307 01:09:59.169555 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156" Mar 7 01:09:59.170426 kubelet[3114]: I0307 01:09:59.169579 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156" Mar 7 01:09:59.170650 
kubelet[3114]: I0307 01:09:59.169604 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156" Mar 7 01:09:59.170650 kubelet[3114]: I0307 01:09:59.169634 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/732da558d8b253226c7633d21d8b9cd4-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-156\" (UID: \"732da558d8b253226c7633d21d8b9cd4\") " pod="kube-system/kube-scheduler-ip-172-31-29-156" Mar 7 01:09:59.170650 kubelet[3114]: I0307 01:09:59.169657 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/622872b6f0395c57dfefd3510adba692-ca-certs\") pod \"kube-apiserver-ip-172-31-29-156\" (UID: \"622872b6f0395c57dfefd3510adba692\") " pod="kube-system/kube-apiserver-ip-172-31-29-156" Mar 7 01:09:59.170650 kubelet[3114]: I0307 01:09:59.169681 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d5457358c539a54dc80ba3206b3b637-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-156\" (UID: \"0d5457358c539a54dc80ba3206b3b637\") " pod="kube-system/kube-controller-manager-ip-172-31-29-156" Mar 7 01:09:59.912370 kubelet[3114]: I0307 01:09:59.912162 3114 apiserver.go:52] "Watching apiserver" Mar 7 01:09:59.965194 kubelet[3114]: I0307 01:09:59.965063 3114 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:09:59.969863 kubelet[3114]: I0307 01:09:59.969797 3114 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-156" podStartSLOduration=0.969778061 podStartE2EDuration="969.778061ms" podCreationTimestamp="2026-03-07 01:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:09:59.966483075 +0000 UTC m=+1.148454256" watchObservedRunningTime="2026-03-07 01:09:59.969778061 +0000 UTC m=+1.151749240" Mar 7 01:10:00.006143 kubelet[3114]: I0307 01:10:00.006061 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-156" podStartSLOduration=3.006038879 podStartE2EDuration="3.006038879s" podCreationTimestamp="2026-03-07 01:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:09:59.989243326 +0000 UTC m=+1.171214506" watchObservedRunningTime="2026-03-07 01:10:00.006038879 +0000 UTC m=+1.188010059" Mar 7 01:10:00.037249 kubelet[3114]: I0307 01:10:00.036043 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-156" podStartSLOduration=1.035969672 podStartE2EDuration="1.035969672s" podCreationTimestamp="2026-03-07 01:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:00.012163638 +0000 UTC m=+1.194134819" watchObservedRunningTime="2026-03-07 01:10:00.035969672 +0000 UTC m=+1.217940850" Mar 7 01:10:04.636884 kubelet[3114]: I0307 01:10:04.636848 3114 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:10:04.637620 containerd[1890]: time="2026-03-07T01:10:04.637585175Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 01:10:04.638207 kubelet[3114]: I0307 01:10:04.637821 3114 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:10:05.232199 kubelet[3114]: I0307 01:10:05.232023 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88dc25a1-9dc7-4c23-ab5e-ca45603a3c06-kube-proxy\") pod \"kube-proxy-sj9p9\" (UID: \"88dc25a1-9dc7-4c23-ab5e-ca45603a3c06\") " pod="kube-system/kube-proxy-sj9p9" Mar 7 01:10:05.232199 kubelet[3114]: I0307 01:10:05.232064 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88dc25a1-9dc7-4c23-ab5e-ca45603a3c06-lib-modules\") pod \"kube-proxy-sj9p9\" (UID: \"88dc25a1-9dc7-4c23-ab5e-ca45603a3c06\") " pod="kube-system/kube-proxy-sj9p9" Mar 7 01:10:05.232199 kubelet[3114]: I0307 01:10:05.232092 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88dc25a1-9dc7-4c23-ab5e-ca45603a3c06-xtables-lock\") pod \"kube-proxy-sj9p9\" (UID: \"88dc25a1-9dc7-4c23-ab5e-ca45603a3c06\") " pod="kube-system/kube-proxy-sj9p9" Mar 7 01:10:05.232199 kubelet[3114]: I0307 01:10:05.232123 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nft8k\" (UniqueName: \"kubernetes.io/projected/88dc25a1-9dc7-4c23-ab5e-ca45603a3c06-kube-api-access-nft8k\") pod \"kube-proxy-sj9p9\" (UID: \"88dc25a1-9dc7-4c23-ab5e-ca45603a3c06\") " pod="kube-system/kube-proxy-sj9p9" Mar 7 01:10:05.233617 systemd[1]: Created slice kubepods-besteffort-pod88dc25a1_9dc7_4c23_ab5e_ca45603a3c06.slice - libcontainer container kubepods-besteffort-pod88dc25a1_9dc7_4c23_ab5e_ca45603a3c06.slice. 
Mar 7 01:10:05.549837 containerd[1890]: time="2026-03-07T01:10:05.549384564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sj9p9,Uid:88dc25a1-9dc7-4c23-ab5e-ca45603a3c06,Namespace:kube-system,Attempt:0,}" Mar 7 01:10:05.630273 containerd[1890]: time="2026-03-07T01:10:05.629868762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:05.630273 containerd[1890]: time="2026-03-07T01:10:05.629956061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:05.630273 containerd[1890]: time="2026-03-07T01:10:05.629974021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:05.630273 containerd[1890]: time="2026-03-07T01:10:05.630131176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:05.665435 systemd[1]: run-containerd-runc-k8s.io-cc65a65c22654b01bff663c579028d2f93b9368aa95702f28137a2a24c229547-runc.XT39ZK.mount: Deactivated successfully. Mar 7 01:10:05.677260 systemd[1]: Started cri-containerd-cc65a65c22654b01bff663c579028d2f93b9368aa95702f28137a2a24c229547.scope - libcontainer container cc65a65c22654b01bff663c579028d2f93b9368aa95702f28137a2a24c229547. 
Mar 7 01:10:05.705764 containerd[1890]: time="2026-03-07T01:10:05.705571608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sj9p9,Uid:88dc25a1-9dc7-4c23-ab5e-ca45603a3c06,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc65a65c22654b01bff663c579028d2f93b9368aa95702f28137a2a24c229547\"" Mar 7 01:10:05.713645 containerd[1890]: time="2026-03-07T01:10:05.713610710Z" level=info msg="CreateContainer within sandbox \"cc65a65c22654b01bff663c579028d2f93b9368aa95702f28137a2a24c229547\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:10:05.740520 containerd[1890]: time="2026-03-07T01:10:05.740462414Z" level=info msg="CreateContainer within sandbox \"cc65a65c22654b01bff663c579028d2f93b9368aa95702f28137a2a24c229547\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a5477320f04f900a10566fd5a5d9b35f44c0fba047d3e9f7447d2f13ab20ac13\"" Mar 7 01:10:05.742706 containerd[1890]: time="2026-03-07T01:10:05.742638305Z" level=info msg="StartContainer for \"a5477320f04f900a10566fd5a5d9b35f44c0fba047d3e9f7447d2f13ab20ac13\"" Mar 7 01:10:05.794954 systemd[1]: Started cri-containerd-a5477320f04f900a10566fd5a5d9b35f44c0fba047d3e9f7447d2f13ab20ac13.scope - libcontainer container a5477320f04f900a10566fd5a5d9b35f44c0fba047d3e9f7447d2f13ab20ac13. 
Mar 7 01:10:05.838130 kubelet[3114]: I0307 01:10:05.837906 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhnmm\" (UniqueName: \"kubernetes.io/projected/10936a21-0d6e-4d13-a0f9-80061dc4a39c-kube-api-access-dhnmm\") pod \"tigera-operator-5588576f44-fqcvh\" (UID: \"10936a21-0d6e-4d13-a0f9-80061dc4a39c\") " pod="tigera-operator/tigera-operator-5588576f44-fqcvh" Mar 7 01:10:05.838130 kubelet[3114]: I0307 01:10:05.837952 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10936a21-0d6e-4d13-a0f9-80061dc4a39c-var-lib-calico\") pod \"tigera-operator-5588576f44-fqcvh\" (UID: \"10936a21-0d6e-4d13-a0f9-80061dc4a39c\") " pod="tigera-operator/tigera-operator-5588576f44-fqcvh" Mar 7 01:10:05.842812 systemd[1]: Created slice kubepods-besteffort-pod10936a21_0d6e_4d13_a0f9_80061dc4a39c.slice - libcontainer container kubepods-besteffort-pod10936a21_0d6e_4d13_a0f9_80061dc4a39c.slice. 
Mar 7 01:10:05.882284 containerd[1890]: time="2026-03-07T01:10:05.882230270Z" level=info msg="StartContainer for \"a5477320f04f900a10566fd5a5d9b35f44c0fba047d3e9f7447d2f13ab20ac13\" returns successfully" Mar 7 01:10:06.053253 kubelet[3114]: I0307 01:10:06.053155 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sj9p9" podStartSLOduration=1.053135727 podStartE2EDuration="1.053135727s" podCreationTimestamp="2026-03-07 01:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:06.052816361 +0000 UTC m=+7.234787542" watchObservedRunningTime="2026-03-07 01:10:06.053135727 +0000 UTC m=+7.235106908" Mar 7 01:10:06.154035 containerd[1890]: time="2026-03-07T01:10:06.152691800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-fqcvh,Uid:10936a21-0d6e-4d13-a0f9-80061dc4a39c,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:10:06.191249 containerd[1890]: time="2026-03-07T01:10:06.190591060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:06.191249 containerd[1890]: time="2026-03-07T01:10:06.190669039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:06.191249 containerd[1890]: time="2026-03-07T01:10:06.190706318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:06.191249 containerd[1890]: time="2026-03-07T01:10:06.190860815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:06.216314 systemd[1]: Started cri-containerd-1ba4891a65be5041c41045c434b1257b1cf6af86e70305a89a360d34ea6efbce.scope - libcontainer container 1ba4891a65be5041c41045c434b1257b1cf6af86e70305a89a360d34ea6efbce. Mar 7 01:10:06.278081 containerd[1890]: time="2026-03-07T01:10:06.277788495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-fqcvh,Uid:10936a21-0d6e-4d13-a0f9-80061dc4a39c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1ba4891a65be5041c41045c434b1257b1cf6af86e70305a89a360d34ea6efbce\"" Mar 7 01:10:06.282204 containerd[1890]: time="2026-03-07T01:10:06.281891149Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:10:06.627905 update_engine[1883]: I20260307 01:10:06.627727 1883 update_attempter.cc:509] Updating boot flags... Mar 7 01:10:06.680115 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3425) Mar 7 01:10:07.672606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776932866.mount: Deactivated successfully. 
Mar 7 01:10:11.159619 containerd[1890]: time="2026-03-07T01:10:11.159557817Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:11.161813 containerd[1890]: time="2026-03-07T01:10:11.161619717Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:10:11.164116 containerd[1890]: time="2026-03-07T01:10:11.164078519Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:11.169425 containerd[1890]: time="2026-03-07T01:10:11.167875444Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:11.169425 containerd[1890]: time="2026-03-07T01:10:11.168629250Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.886692296s" Mar 7 01:10:11.169425 containerd[1890]: time="2026-03-07T01:10:11.168665579Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:10:11.175849 containerd[1890]: time="2026-03-07T01:10:11.175810857Z" level=info msg="CreateContainer within sandbox \"1ba4891a65be5041c41045c434b1257b1cf6af86e70305a89a360d34ea6efbce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:10:11.198173 containerd[1890]: time="2026-03-07T01:10:11.198117774Z" level=info msg="CreateContainer within sandbox 
\"1ba4891a65be5041c41045c434b1257b1cf6af86e70305a89a360d34ea6efbce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565\"" Mar 7 01:10:11.198862 containerd[1890]: time="2026-03-07T01:10:11.198829368Z" level=info msg="StartContainer for \"307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565\"" Mar 7 01:10:11.233945 systemd[1]: run-containerd-runc-k8s.io-307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565-runc.6ct2NB.mount: Deactivated successfully. Mar 7 01:10:11.245315 systemd[1]: Started cri-containerd-307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565.scope - libcontainer container 307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565. Mar 7 01:10:11.284867 containerd[1890]: time="2026-03-07T01:10:11.284046113Z" level=info msg="StartContainer for \"307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565\" returns successfully" Mar 7 01:10:18.270617 sudo[2220]: pam_unix(sudo:session): session closed for user root Mar 7 01:10:18.351540 sshd[2217]: pam_unix(sshd:session): session closed for user core Mar 7 01:10:18.356213 systemd[1]: sshd@6-172.31.29.156:22-68.220.241.50:56418.service: Deactivated successfully. Mar 7 01:10:18.359319 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:10:18.359678 systemd[1]: session-7.scope: Consumed 6.872s CPU time, 150.2M memory peak, 0B memory swap peak. Mar 7 01:10:18.363212 systemd-logind[1880]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:10:18.364969 systemd-logind[1880]: Removed session 7. 
Mar 7 01:10:21.756322 kubelet[3114]: I0307 01:10:21.755417 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-fqcvh" podStartSLOduration=11.865944444 podStartE2EDuration="16.755396874s" podCreationTimestamp="2026-03-07 01:10:05 +0000 UTC" firstStartedPulling="2026-03-07 01:10:06.280817805 +0000 UTC m=+7.462788975" lastFinishedPulling="2026-03-07 01:10:11.170270233 +0000 UTC m=+12.352241405" observedRunningTime="2026-03-07 01:10:12.062543233 +0000 UTC m=+13.244514413" watchObservedRunningTime="2026-03-07 01:10:21.755396874 +0000 UTC m=+22.937368054" Mar 7 01:10:21.771452 systemd[1]: Created slice kubepods-besteffort-podc6a3ee3b_91a9_4681_bb0b_403841224e70.slice - libcontainer container kubepods-besteffort-podc6a3ee3b_91a9_4681_bb0b_403841224e70.slice. Mar 7 01:10:21.848129 kubelet[3114]: I0307 01:10:21.847940 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c6a3ee3b-91a9-4681-bb0b-403841224e70-typha-certs\") pod \"calico-typha-5b5f77846d-5n286\" (UID: \"c6a3ee3b-91a9-4681-bb0b-403841224e70\") " pod="calico-system/calico-typha-5b5f77846d-5n286" Mar 7 01:10:21.848129 kubelet[3114]: I0307 01:10:21.847992 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6a3ee3b-91a9-4681-bb0b-403841224e70-tigera-ca-bundle\") pod \"calico-typha-5b5f77846d-5n286\" (UID: \"c6a3ee3b-91a9-4681-bb0b-403841224e70\") " pod="calico-system/calico-typha-5b5f77846d-5n286" Mar 7 01:10:21.848129 kubelet[3114]: I0307 01:10:21.848034 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b9qq\" (UniqueName: \"kubernetes.io/projected/c6a3ee3b-91a9-4681-bb0b-403841224e70-kube-api-access-4b9qq\") pod \"calico-typha-5b5f77846d-5n286\" (UID: 
\"c6a3ee3b-91a9-4681-bb0b-403841224e70\") " pod="calico-system/calico-typha-5b5f77846d-5n286" Mar 7 01:10:22.025084 systemd[1]: Created slice kubepods-besteffort-pode744e922_93ca_4f3f_a019_ddffe4903c17.slice - libcontainer container kubepods-besteffort-pode744e922_93ca_4f3f_a019_ddffe4903c17.slice. Mar 7 01:10:22.049376 kubelet[3114]: I0307 01:10:22.049330 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e744e922-93ca-4f3f-a019-ddffe4903c17-node-certs\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049376 kubelet[3114]: I0307 01:10:22.049375 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-sys-fs\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049587 kubelet[3114]: I0307 01:10:22.049403 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-cni-net-dir\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049587 kubelet[3114]: I0307 01:10:22.049422 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-lib-modules\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049587 kubelet[3114]: I0307 01:10:22.049449 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-policysync\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049587 kubelet[3114]: I0307 01:10:22.049486 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e744e922-93ca-4f3f-a019-ddffe4903c17-tigera-ca-bundle\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049587 kubelet[3114]: I0307 01:10:22.049512 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-nodeproc\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049809 kubelet[3114]: I0307 01:10:22.049537 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-bpffs\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049809 kubelet[3114]: I0307 01:10:22.049561 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-var-lib-calico\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049809 kubelet[3114]: I0307 01:10:22.049583 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmbjd\" (UniqueName: \"kubernetes.io/projected/e744e922-93ca-4f3f-a019-ddffe4903c17-kube-api-access-zmbjd\") pod 
\"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049809 kubelet[3114]: I0307 01:10:22.049607 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-cni-bin-dir\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.049809 kubelet[3114]: I0307 01:10:22.049631 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-cni-log-dir\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.050086 kubelet[3114]: I0307 01:10:22.049662 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-flexvol-driver-host\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.050086 kubelet[3114]: I0307 01:10:22.049684 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-var-run-calico\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.050086 kubelet[3114]: I0307 01:10:22.049712 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e744e922-93ca-4f3f-a019-ddffe4903c17-xtables-lock\") pod \"calico-node-6sqr7\" (UID: \"e744e922-93ca-4f3f-a019-ddffe4903c17\") " 
pod="calico-system/calico-node-6sqr7" Mar 7 01:10:22.079302 containerd[1890]: time="2026-03-07T01:10:22.079258421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b5f77846d-5n286,Uid:c6a3ee3b-91a9-4681-bb0b-403841224e70,Namespace:calico-system,Attempt:0,}" Mar 7 01:10:22.133539 kubelet[3114]: E0307 01:10:22.130950 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa" Mar 7 01:10:22.152071 kubelet[3114]: I0307 01:10:22.150670 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb0ce17a-98ad-40fb-b4f7-1528d43404aa-kubelet-dir\") pod \"csi-node-driver-9sl5j\" (UID: \"cb0ce17a-98ad-40fb-b4f7-1528d43404aa\") " pod="calico-system/csi-node-driver-9sl5j" Mar 7 01:10:22.152071 kubelet[3114]: I0307 01:10:22.150980 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cb0ce17a-98ad-40fb-b4f7-1528d43404aa-varrun\") pod \"csi-node-driver-9sl5j\" (UID: \"cb0ce17a-98ad-40fb-b4f7-1528d43404aa\") " pod="calico-system/csi-node-driver-9sl5j" Mar 7 01:10:22.152071 kubelet[3114]: I0307 01:10:22.151183 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb0ce17a-98ad-40fb-b4f7-1528d43404aa-registration-dir\") pod \"csi-node-driver-9sl5j\" (UID: \"cb0ce17a-98ad-40fb-b4f7-1528d43404aa\") " pod="calico-system/csi-node-driver-9sl5j" Mar 7 01:10:22.152071 kubelet[3114]: I0307 01:10:22.151207 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb0ce17a-98ad-40fb-b4f7-1528d43404aa-socket-dir\") pod \"csi-node-driver-9sl5j\" (UID: \"cb0ce17a-98ad-40fb-b4f7-1528d43404aa\") " pod="calico-system/csi-node-driver-9sl5j" Mar 7 01:10:22.152071 kubelet[3114]: I0307 01:10:22.151235 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42ww\" (UniqueName: \"kubernetes.io/projected/cb0ce17a-98ad-40fb-b4f7-1528d43404aa-kube-api-access-x42ww\") pod \"csi-node-driver-9sl5j\" (UID: \"cb0ce17a-98ad-40fb-b4f7-1528d43404aa\") " pod="calico-system/csi-node-driver-9sl5j" Mar 7 01:10:22.157277 kubelet[3114]: E0307 01:10:22.157206 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:22.161934 kubelet[3114]: W0307 01:10:22.161881 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:22.162100 kubelet[3114]: E0307 01:10:22.161955 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:22.177780 kubelet[3114]: E0307 01:10:22.177582 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:22.177780 kubelet[3114]: W0307 01:10:22.177615 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:22.177780 kubelet[3114]: E0307 01:10:22.177638 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:10:22.181705 kubelet[3114]: E0307 01:10:22.180246 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:22.181705 kubelet[3114]: W0307 01:10:22.180270 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:22.181705 kubelet[3114]: E0307 01:10:22.180292 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:22.183683 kubelet[3114]: E0307 01:10:22.183213 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:22.183683 kubelet[3114]: W0307 01:10:22.183234 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:22.183683 kubelet[3114]: E0307 01:10:22.183255 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:10:22.209527 kubelet[3114]: E0307 01:10:22.209413 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:22.209527 kubelet[3114]: W0307 01:10:22.209436 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:22.209527 kubelet[3114]: E0307 01:10:22.209459 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:22.224080 containerd[1890]: time="2026-03-07T01:10:22.223364017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:22.224256 containerd[1890]: time="2026-03-07T01:10:22.224123283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:22.224256 containerd[1890]: time="2026-03-07T01:10:22.224210894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:22.225803 containerd[1890]: time="2026-03-07T01:10:22.225687344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:10:22.273207 systemd[1]: Started cri-containerd-88d28b9cad76ec27e86974cba7adb35d8c1d3a52b6f5a7f8e4772a99d9a7fe36.scope - libcontainer container 88d28b9cad76ec27e86974cba7adb35d8c1d3a52b6f5a7f8e4772a99d9a7fe36.
Mar 7 01:10:22.337502 containerd[1890]: time="2026-03-07T01:10:22.337319522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6sqr7,Uid:e744e922-93ca-4f3f-a019-ddffe4903c17,Namespace:calico-system,Attempt:0,}"
Mar 7 01:10:22.386519 containerd[1890]: time="2026-03-07T01:10:22.386385232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:10:22.386519 containerd[1890]: time="2026-03-07T01:10:22.386452974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:10:22.386519 containerd[1890]: time="2026-03-07T01:10:22.386468824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:10:22.386776 containerd[1890]: time="2026-03-07T01:10:22.386560335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:10:22.390797 containerd[1890]: time="2026-03-07T01:10:22.390754092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b5f77846d-5n286,Uid:c6a3ee3b-91a9-4681-bb0b-403841224e70,Namespace:calico-system,Attempt:0,} returns sandbox id \"88d28b9cad76ec27e86974cba7adb35d8c1d3a52b6f5a7f8e4772a99d9a7fe36\""
Mar 7 01:10:22.403874 containerd[1890]: time="2026-03-07T01:10:22.402940472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 7 01:10:22.414284 systemd[1]: Started cri-containerd-703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf.scope - libcontainer container 703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf.
Mar 7 01:10:22.466135 containerd[1890]: time="2026-03-07T01:10:22.464573019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6sqr7,Uid:e744e922-93ca-4f3f-a019-ddffe4903c17,Namespace:calico-system,Attempt:0,} returns sandbox id \"703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf\""
Mar 7 01:10:23.793589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993153266.mount: Deactivated successfully.
Mar 7 01:10:23.987312 kubelet[3114]: E0307 01:10:23.986711 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:24.723661 containerd[1890]: time="2026-03-07T01:10:24.723610789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:24.725140 containerd[1890]: time="2026-03-07T01:10:24.724950893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 7 01:10:24.726699 containerd[1890]: time="2026-03-07T01:10:24.726471922Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:24.732034 containerd[1890]: time="2026-03-07T01:10:24.730357220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:24.748043 containerd[1890]: time="2026-03-07T01:10:24.747523546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.344040348s"
Mar 7 01:10:24.748043 containerd[1890]: time="2026-03-07T01:10:24.747578232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 7 01:10:24.751180 containerd[1890]: time="2026-03-07T01:10:24.750154941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 7 01:10:24.786450 containerd[1890]: time="2026-03-07T01:10:24.786411200Z" level=info msg="CreateContainer within sandbox \"88d28b9cad76ec27e86974cba7adb35d8c1d3a52b6f5a7f8e4772a99d9a7fe36\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 7 01:10:24.855900 containerd[1890]: time="2026-03-07T01:10:24.855846576Z" level=info msg="CreateContainer within sandbox \"88d28b9cad76ec27e86974cba7adb35d8c1d3a52b6f5a7f8e4772a99d9a7fe36\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"82c3a74609ebb83efbfb91d44836f18b39f2c6e01a83bca42ba3b5da234c356a\""
Mar 7 01:10:24.857963 containerd[1890]: time="2026-03-07T01:10:24.856468492Z" level=info msg="StartContainer for \"82c3a74609ebb83efbfb91d44836f18b39f2c6e01a83bca42ba3b5da234c356a\""
Mar 7 01:10:24.905177 systemd[1]: Started cri-containerd-82c3a74609ebb83efbfb91d44836f18b39f2c6e01a83bca42ba3b5da234c356a.scope - libcontainer container 82c3a74609ebb83efbfb91d44836f18b39f2c6e01a83bca42ba3b5da234c356a.
Mar 7 01:10:24.962987 containerd[1890]: time="2026-03-07T01:10:24.962852924Z" level=info msg="StartContainer for \"82c3a74609ebb83efbfb91d44836f18b39f2c6e01a83bca42ba3b5da234c356a\" returns successfully"
Mar 7 01:10:25.168733 kubelet[3114]: E0307 01:10:25.168679 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:10:25.170130 kubelet[3114]: W0307 01:10:25.170098 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:10:25.170513 kubelet[3114]: E0307 01:10:25.170341 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:10:25.183992 kubelet[3114]: E0307 01:10:25.183782 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:10:25.183992 kubelet[3114]: W0307 01:10:25.183796 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:10:25.183992 kubelet[3114]: E0307 01:10:25.183811 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:10:25.184590 kubelet[3114]: E0307 01:10:25.184465 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.184590 kubelet[3114]: W0307 01:10:25.184477 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.184590 kubelet[3114]: E0307 01:10:25.184490 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:25.185269 kubelet[3114]: E0307 01:10:25.185100 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.185269 kubelet[3114]: W0307 01:10:25.185114 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.185269 kubelet[3114]: E0307 01:10:25.185127 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:10:25.186045 kubelet[3114]: E0307 01:10:25.185792 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.186045 kubelet[3114]: W0307 01:10:25.185806 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.186045 kubelet[3114]: E0307 01:10:25.185908 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:25.187123 kubelet[3114]: E0307 01:10:25.186860 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.187123 kubelet[3114]: W0307 01:10:25.186874 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.187123 kubelet[3114]: E0307 01:10:25.186890 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:10:25.187461 kubelet[3114]: E0307 01:10:25.187328 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.187461 kubelet[3114]: W0307 01:10:25.187341 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.187461 kubelet[3114]: E0307 01:10:25.187354 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:25.187930 kubelet[3114]: E0307 01:10:25.187753 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.187930 kubelet[3114]: W0307 01:10:25.187766 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.187930 kubelet[3114]: E0307 01:10:25.187782 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:10:25.188590 kubelet[3114]: E0307 01:10:25.188497 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.188590 kubelet[3114]: W0307 01:10:25.188509 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.188590 kubelet[3114]: E0307 01:10:25.188523 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:25.190403 kubelet[3114]: E0307 01:10:25.189719 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.190403 kubelet[3114]: W0307 01:10:25.189733 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.190403 kubelet[3114]: E0307 01:10:25.189747 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:10:25.191071 kubelet[3114]: E0307 01:10:25.190606 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.191071 kubelet[3114]: W0307 01:10:25.190621 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.191071 kubelet[3114]: E0307 01:10:25.190634 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:25.191614 kubelet[3114]: E0307 01:10:25.191601 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.191732 kubelet[3114]: W0307 01:10:25.191687 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.191732 kubelet[3114]: E0307 01:10:25.191708 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:10:25.193081 kubelet[3114]: E0307 01:10:25.193036 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.193081 kubelet[3114]: W0307 01:10:25.193051 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.193081 kubelet[3114]: E0307 01:10:25.193066 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:25.193865 kubelet[3114]: E0307 01:10:25.193657 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.193865 kubelet[3114]: W0307 01:10:25.193671 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.193865 kubelet[3114]: E0307 01:10:25.193683 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:10:25.194164 kubelet[3114]: E0307 01:10:25.194086 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:10:25.194164 kubelet[3114]: W0307 01:10:25.194098 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:10:25.194164 kubelet[3114]: E0307 01:10:25.194111 3114 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:10:25.960422 containerd[1890]: time="2026-03-07T01:10:25.960374443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:25.962456 containerd[1890]: time="2026-03-07T01:10:25.962380655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:10:25.964790 containerd[1890]: time="2026-03-07T01:10:25.964723102Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:25.969803 containerd[1890]: time="2026-03-07T01:10:25.968875295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:25.969803 containerd[1890]: time="2026-03-07T01:10:25.969670109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.219469759s" Mar 7 01:10:25.969803 containerd[1890]: time="2026-03-07T01:10:25.969709805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:10:25.975803 containerd[1890]: time="2026-03-07T01:10:25.975766388Z" level=info msg="CreateContainer within sandbox \"703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:10:25.986282 kubelet[3114]: E0307 01:10:25.986222 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa" Mar 7 01:10:26.001147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531106398.mount: Deactivated successfully. Mar 7 01:10:26.008000 containerd[1890]: time="2026-03-07T01:10:26.007956400Z" level=info msg="CreateContainer within sandbox \"703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"baaa4d3b751afba392a9779b310a8d363b6252a6619d626540fb40162c9246d1\"" Mar 7 01:10:26.009172 containerd[1890]: time="2026-03-07T01:10:26.009136222Z" level=info msg="StartContainer for \"baaa4d3b751afba392a9779b310a8d363b6252a6619d626540fb40162c9246d1\"" Mar 7 01:10:26.051206 systemd[1]: Started cri-containerd-baaa4d3b751afba392a9779b310a8d363b6252a6619d626540fb40162c9246d1.scope - libcontainer container baaa4d3b751afba392a9779b310a8d363b6252a6619d626540fb40162c9246d1. 
Mar 7 01:10:26.087973 containerd[1890]: time="2026-03-07T01:10:26.087941073Z" level=info msg="StartContainer for \"baaa4d3b751afba392a9779b310a8d363b6252a6619d626540fb40162c9246d1\" returns successfully"
Mar 7 01:10:26.100157 kubelet[3114]: I0307 01:10:26.099189 3114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:10:26.114909 systemd[1]: cri-containerd-baaa4d3b751afba392a9779b310a8d363b6252a6619d626540fb40162c9246d1.scope: Deactivated successfully.
Mar 7 01:10:26.126782 kubelet[3114]: I0307 01:10:26.125216 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b5f77846d-5n286" podStartSLOduration=2.776754849 podStartE2EDuration="5.125195819s" podCreationTimestamp="2026-03-07 01:10:21 +0000 UTC" firstStartedPulling="2026-03-07 01:10:22.400403262 +0000 UTC m=+23.582374434" lastFinishedPulling="2026-03-07 01:10:24.74884423 +0000 UTC m=+25.930815404" observedRunningTime="2026-03-07 01:10:25.151916409 +0000 UTC m=+26.333887603" watchObservedRunningTime="2026-03-07 01:10:26.125195819 +0000 UTC m=+27.307166999"
Mar 7 01:10:26.249234 containerd[1890]: time="2026-03-07T01:10:26.244218262Z" level=info msg="shim disconnected" id=baaa4d3b751afba392a9779b310a8d363b6252a6619d626540fb40162c9246d1 namespace=k8s.io
Mar 7 01:10:26.249636 containerd[1890]: time="2026-03-07T01:10:26.249467932Z" level=warning msg="cleaning up after shim disconnected" id=baaa4d3b751afba392a9779b310a8d363b6252a6619d626540fb40162c9246d1 namespace=k8s.io
Mar 7 01:10:26.249773 containerd[1890]: time="2026-03-07T01:10:26.249750466Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:10:26.992367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baaa4d3b751afba392a9779b310a8d363b6252a6619d626540fb40162c9246d1-rootfs.mount: Deactivated successfully.
Mar 7 01:10:27.105118 containerd[1890]: time="2026-03-07T01:10:27.104830833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 7 01:10:27.987119 kubelet[3114]: E0307 01:10:27.987056 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:29.987022 kubelet[3114]: E0307 01:10:29.986964 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:31.986694 kubelet[3114]: E0307 01:10:31.986618 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:33.986600 kubelet[3114]: E0307 01:10:33.986548 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:35.986413 kubelet[3114]: E0307 01:10:35.986357 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:37.987749 kubelet[3114]: E0307 01:10:37.986799 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:38.980540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731848693.mount: Deactivated successfully.
Mar 7 01:10:39.043490 containerd[1890]: time="2026-03-07T01:10:39.033438195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:39.045205 containerd[1890]: time="2026-03-07T01:10:39.037827977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 7 01:10:39.048047 containerd[1890]: time="2026-03-07T01:10:39.047083382Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:39.050826 containerd[1890]: time="2026-03-07T01:10:39.050645707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:39.052413 containerd[1890]: time="2026-03-07T01:10:39.052286742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 11.947406172s"
Mar 7 01:10:39.052413 containerd[1890]: time="2026-03-07T01:10:39.052327993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 7 01:10:39.064654 containerd[1890]: time="2026-03-07T01:10:39.064614860Z" level=info msg="CreateContainer within sandbox \"703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 01:10:39.096562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307853124.mount: Deactivated successfully.
Mar 7 01:10:39.110224 containerd[1890]: time="2026-03-07T01:10:39.110176019Z" level=info msg="CreateContainer within sandbox \"703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"4f16e26ba1edfb3dc1dd746e7ea125f209f446b842a99e3b5271337ffe1698ee\""
Mar 7 01:10:39.111050 containerd[1890]: time="2026-03-07T01:10:39.110994303Z" level=info msg="StartContainer for \"4f16e26ba1edfb3dc1dd746e7ea125f209f446b842a99e3b5271337ffe1698ee\""
Mar 7 01:10:39.173469 systemd[1]: Started cri-containerd-4f16e26ba1edfb3dc1dd746e7ea125f209f446b842a99e3b5271337ffe1698ee.scope - libcontainer container 4f16e26ba1edfb3dc1dd746e7ea125f209f446b842a99e3b5271337ffe1698ee.
Mar 7 01:10:39.222743 containerd[1890]: time="2026-03-07T01:10:39.222675401Z" level=info msg="StartContainer for \"4f16e26ba1edfb3dc1dd746e7ea125f209f446b842a99e3b5271337ffe1698ee\" returns successfully"
Mar 7 01:10:39.279358 systemd[1]: cri-containerd-4f16e26ba1edfb3dc1dd746e7ea125f209f446b842a99e3b5271337ffe1698ee.scope: Deactivated successfully.
Mar 7 01:10:39.323399 containerd[1890]: time="2026-03-07T01:10:39.323303956Z" level=info msg="shim disconnected" id=4f16e26ba1edfb3dc1dd746e7ea125f209f446b842a99e3b5271337ffe1698ee namespace=k8s.io
Mar 7 01:10:39.323399 containerd[1890]: time="2026-03-07T01:10:39.323378644Z" level=warning msg="cleaning up after shim disconnected" id=4f16e26ba1edfb3dc1dd746e7ea125f209f446b842a99e3b5271337ffe1698ee namespace=k8s.io
Mar 7 01:10:39.323399 containerd[1890]: time="2026-03-07T01:10:39.323392824Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:10:39.702766 kubelet[3114]: I0307 01:10:39.702444 3114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:10:39.981330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f16e26ba1edfb3dc1dd746e7ea125f209f446b842a99e3b5271337ffe1698ee-rootfs.mount: Deactivated successfully.
Mar 7 01:10:39.986744 kubelet[3114]: E0307 01:10:39.986675 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:40.202403 containerd[1890]: time="2026-03-07T01:10:40.202257281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 01:10:41.987510 kubelet[3114]: E0307 01:10:41.986984 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:43.272481 containerd[1890]: time="2026-03-07T01:10:43.271257877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:43.277793 containerd[1890]: time="2026-03-07T01:10:43.277730542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 01:10:43.279127 containerd[1890]: time="2026-03-07T01:10:43.279086418Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:43.282133 containerd[1890]: time="2026-03-07T01:10:43.282073169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:43.283048 containerd[1890]: time="2026-03-07T01:10:43.282879414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.080466264s"
Mar 7 01:10:43.283048 containerd[1890]: time="2026-03-07T01:10:43.282920875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 01:10:43.288610 containerd[1890]: time="2026-03-07T01:10:43.288569889Z" level=info msg="CreateContainer within sandbox \"703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:10:43.316382 containerd[1890]: time="2026-03-07T01:10:43.316327796Z" level=info msg="CreateContainer within sandbox \"703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c\""
Mar 7 01:10:43.318796 containerd[1890]: time="2026-03-07T01:10:43.316950112Z" level=info msg="StartContainer for \"3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c\""
Mar 7 01:10:43.357967 systemd[1]: run-containerd-runc-k8s.io-3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c-runc.tMQ44E.mount: Deactivated successfully.
Mar 7 01:10:43.368254 systemd[1]: Started cri-containerd-3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c.scope - libcontainer container 3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c.
Mar 7 01:10:43.404039 containerd[1890]: time="2026-03-07T01:10:43.403916596Z" level=info msg="StartContainer for \"3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c\" returns successfully"
Mar 7 01:10:43.987024 kubelet[3114]: E0307 01:10:43.986967 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:44.457253 systemd[1]: cri-containerd-3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c.scope: Deactivated successfully.
Mar 7 01:10:44.502565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c-rootfs.mount: Deactivated successfully.
Mar 7 01:10:44.504398 containerd[1890]: time="2026-03-07T01:10:44.504336265Z" level=info msg="shim disconnected" id=3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c namespace=k8s.io
Mar 7 01:10:44.504398 containerd[1890]: time="2026-03-07T01:10:44.504395876Z" level=warning msg="cleaning up after shim disconnected" id=3731e2793c2028b64ddc2739bbf1f59115cf5b92e2b8dcf5b86d9319b5c8aa1c namespace=k8s.io
Mar 7 01:10:44.505041 containerd[1890]: time="2026-03-07T01:10:44.504407187Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:10:44.562472 kubelet[3114]: I0307 01:10:44.562442 3114 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 7 01:10:44.745733 kubelet[3114]: I0307 01:10:44.744656 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3318a07-edd7-4b14-bc1e-8289ed132c3a-config-volume\") pod \"coredns-66bc5c9577-7lsxp\" (UID: \"c3318a07-edd7-4b14-bc1e-8289ed132c3a\") " pod="kube-system/coredns-66bc5c9577-7lsxp"
Mar 7 01:10:44.745733 kubelet[3114]: I0307 01:10:44.744703 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec167290-b451-4cd1-a0d3-0912ddaf0ce2-config\") pod \"goldmane-cccfbd5cf-9kx9l\" (UID: \"ec167290-b451-4cd1-a0d3-0912ddaf0ce2\") " pod="calico-system/goldmane-cccfbd5cf-9kx9l"
Mar 7 01:10:44.745733 kubelet[3114]: I0307 01:10:44.744738 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv7cq\" (UniqueName: \"kubernetes.io/projected/b097a7a7-3ecc-4008-8602-5cdf4dd7e06f-kube-api-access-hv7cq\") pod \"calico-kube-controllers-dd6b64fbf-c45tm\" (UID: \"b097a7a7-3ecc-4008-8602-5cdf4dd7e06f\") " pod="calico-system/calico-kube-controllers-dd6b64fbf-c45tm"
Mar 7 01:10:44.745733 kubelet[3114]: I0307 01:10:44.744762 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9vf\" (UniqueName: \"kubernetes.io/projected/c72f5271-c799-4260-ae9b-eddba6e2b3f5-kube-api-access-hb9vf\") pod \"coredns-66bc5c9577-rvl56\" (UID: \"c72f5271-c799-4260-ae9b-eddba6e2b3f5\") " pod="kube-system/coredns-66bc5c9577-rvl56"
Mar 7 01:10:44.745733 kubelet[3114]: I0307 01:10:44.744794 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wn6\" (UniqueName: \"kubernetes.io/projected/31e4d6ab-37b1-4c95-b346-1277ec27fa0d-kube-api-access-v6wn6\") pod \"calico-apiserver-5fffbd9bb8-p7j5c\" (UID: \"31e4d6ab-37b1-4c95-b346-1277ec27fa0d\") " pod="calico-system/calico-apiserver-5fffbd9bb8-p7j5c"
Mar 7 01:10:44.747486 kubelet[3114]: I0307 01:10:44.744821 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkts\" (UniqueName: \"kubernetes.io/projected/84013ff8-1caf-4210-8629-c308dc9cf92c-kube-api-access-zfkts\") pod \"calico-apiserver-5fffbd9bb8-8fmss\" (UID: \"84013ff8-1caf-4210-8629-c308dc9cf92c\") " pod="calico-system/calico-apiserver-5fffbd9bb8-8fmss"
Mar 7 01:10:44.747486 kubelet[3114]: I0307 01:10:44.744848 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ec167290-b451-4cd1-a0d3-0912ddaf0ce2-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-9kx9l\" (UID: \"ec167290-b451-4cd1-a0d3-0912ddaf0ce2\") " pod="calico-system/goldmane-cccfbd5cf-9kx9l"
Mar 7 01:10:44.747486 kubelet[3114]: I0307 01:10:44.744874 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31e4d6ab-37b1-4c95-b346-1277ec27fa0d-calico-apiserver-certs\") pod \"calico-apiserver-5fffbd9bb8-p7j5c\" (UID: \"31e4d6ab-37b1-4c95-b346-1277ec27fa0d\") " pod="calico-system/calico-apiserver-5fffbd9bb8-p7j5c"
Mar 7 01:10:44.747486 kubelet[3114]: I0307 01:10:44.744899 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mp6\" (UniqueName: \"kubernetes.io/projected/c3318a07-edd7-4b14-bc1e-8289ed132c3a-kube-api-access-h7mp6\") pod \"coredns-66bc5c9577-7lsxp\" (UID: \"c3318a07-edd7-4b14-bc1e-8289ed132c3a\") " pod="kube-system/coredns-66bc5c9577-7lsxp"
Mar 7 01:10:44.747486 kubelet[3114]: I0307 01:10:44.744922 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84013ff8-1caf-4210-8629-c308dc9cf92c-calico-apiserver-certs\") pod \"calico-apiserver-5fffbd9bb8-8fmss\" (UID: \"84013ff8-1caf-4210-8629-c308dc9cf92c\") " pod="calico-system/calico-apiserver-5fffbd9bb8-8fmss"
Mar 7 01:10:44.746071 systemd[1]: Created slice kubepods-besteffort-podb097a7a7_3ecc_4008_8602_5cdf4dd7e06f.slice - libcontainer container kubepods-besteffort-podb097a7a7_3ecc_4008_8602_5cdf4dd7e06f.slice.
Mar 7 01:10:44.747870 kubelet[3114]: I0307 01:10:44.744950 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec167290-b451-4cd1-a0d3-0912ddaf0ce2-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-9kx9l\" (UID: \"ec167290-b451-4cd1-a0d3-0912ddaf0ce2\") " pod="calico-system/goldmane-cccfbd5cf-9kx9l"
Mar 7 01:10:44.747870 kubelet[3114]: I0307 01:10:44.744973 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gssqr\" (UniqueName: \"kubernetes.io/projected/ec167290-b451-4cd1-a0d3-0912ddaf0ce2-kube-api-access-gssqr\") pod \"goldmane-cccfbd5cf-9kx9l\" (UID: \"ec167290-b451-4cd1-a0d3-0912ddaf0ce2\") " pod="calico-system/goldmane-cccfbd5cf-9kx9l"
Mar 7 01:10:44.747870 kubelet[3114]: I0307 01:10:44.747206 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b097a7a7-3ecc-4008-8602-5cdf4dd7e06f-tigera-ca-bundle\") pod \"calico-kube-controllers-dd6b64fbf-c45tm\" (UID: \"b097a7a7-3ecc-4008-8602-5cdf4dd7e06f\") " pod="calico-system/calico-kube-controllers-dd6b64fbf-c45tm"
Mar 7 01:10:44.747870 kubelet[3114]: I0307 01:10:44.747259 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c72f5271-c799-4260-ae9b-eddba6e2b3f5-config-volume\") pod \"coredns-66bc5c9577-rvl56\" (UID: \"c72f5271-c799-4260-ae9b-eddba6e2b3f5\") " pod="kube-system/coredns-66bc5c9577-rvl56"
Mar 7 01:10:44.758776 systemd[1]: Created slice kubepods-burstable-podc72f5271_c799_4260_ae9b_eddba6e2b3f5.slice - libcontainer container kubepods-burstable-podc72f5271_c799_4260_ae9b_eddba6e2b3f5.slice.
Mar 7 01:10:44.768782 systemd[1]: Created slice kubepods-burstable-podc3318a07_edd7_4b14_bc1e_8289ed132c3a.slice - libcontainer container kubepods-burstable-podc3318a07_edd7_4b14_bc1e_8289ed132c3a.slice.
Mar 7 01:10:44.779524 systemd[1]: Created slice kubepods-besteffort-pod31e4d6ab_37b1_4c95_b346_1277ec27fa0d.slice - libcontainer container kubepods-besteffort-pod31e4d6ab_37b1_4c95_b346_1277ec27fa0d.slice.
Mar 7 01:10:44.787306 systemd[1]: Created slice kubepods-besteffort-podec167290_b451_4cd1_a0d3_0912ddaf0ce2.slice - libcontainer container kubepods-besteffort-podec167290_b451_4cd1_a0d3_0912ddaf0ce2.slice.
Mar 7 01:10:44.798324 systemd[1]: Created slice kubepods-besteffort-pod84013ff8_1caf_4210_8629_c308dc9cf92c.slice - libcontainer container kubepods-besteffort-pod84013ff8_1caf_4210_8629_c308dc9cf92c.slice.
Mar 7 01:10:44.805621 systemd[1]: Created slice kubepods-besteffort-pode39111b2_0e10_4e35_8ff3_e8249485a878.slice - libcontainer container kubepods-besteffort-pode39111b2_0e10_4e35_8ff3_e8249485a878.slice.
Mar 7 01:10:44.849026 kubelet[3114]: I0307 01:10:44.848203 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e39111b2-0e10-4e35-8ff3-e8249485a878-whisker-backend-key-pair\") pod \"whisker-669bdb6b65-nhkt8\" (UID: \"e39111b2-0e10-4e35-8ff3-e8249485a878\") " pod="calico-system/whisker-669bdb6b65-nhkt8"
Mar 7 01:10:44.849026 kubelet[3114]: I0307 01:10:44.848253 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzw6w\" (UniqueName: \"kubernetes.io/projected/e39111b2-0e10-4e35-8ff3-e8249485a878-kube-api-access-bzw6w\") pod \"whisker-669bdb6b65-nhkt8\" (UID: \"e39111b2-0e10-4e35-8ff3-e8249485a878\") " pod="calico-system/whisker-669bdb6b65-nhkt8"
Mar 7 01:10:44.849026 kubelet[3114]: I0307 01:10:44.848282 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e39111b2-0e10-4e35-8ff3-e8249485a878-nginx-config\") pod \"whisker-669bdb6b65-nhkt8\" (UID: \"e39111b2-0e10-4e35-8ff3-e8249485a878\") " pod="calico-system/whisker-669bdb6b65-nhkt8"
Mar 7 01:10:44.849026 kubelet[3114]: I0307 01:10:44.848347 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e39111b2-0e10-4e35-8ff3-e8249485a878-whisker-ca-bundle\") pod \"whisker-669bdb6b65-nhkt8\" (UID: \"e39111b2-0e10-4e35-8ff3-e8249485a878\") " pod="calico-system/whisker-669bdb6b65-nhkt8"
Mar 7 01:10:45.065391 containerd[1890]: time="2026-03-07T01:10:45.065257626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd6b64fbf-c45tm,Uid:b097a7a7-3ecc-4008-8602-5cdf4dd7e06f,Namespace:calico-system,Attempt:0,}"
Mar 7 01:10:45.069234 containerd[1890]: time="2026-03-07T01:10:45.069197301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rvl56,Uid:c72f5271-c799-4260-ae9b-eddba6e2b3f5,Namespace:kube-system,Attempt:0,}"
Mar 7 01:10:45.078307 containerd[1890]: time="2026-03-07T01:10:45.078261100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7lsxp,Uid:c3318a07-edd7-4b14-bc1e-8289ed132c3a,Namespace:kube-system,Attempt:0,}"
Mar 7 01:10:45.087705 containerd[1890]: time="2026-03-07T01:10:45.087663554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fffbd9bb8-p7j5c,Uid:31e4d6ab-37b1-4c95-b346-1277ec27fa0d,Namespace:calico-system,Attempt:0,}"
Mar 7 01:10:45.099895 containerd[1890]: time="2026-03-07T01:10:45.099847748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-9kx9l,Uid:ec167290-b451-4cd1-a0d3-0912ddaf0ce2,Namespace:calico-system,Attempt:0,}"
Mar 7 01:10:45.108941 containerd[1890]: time="2026-03-07T01:10:45.108663075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fffbd9bb8-8fmss,Uid:84013ff8-1caf-4210-8629-c308dc9cf92c,Namespace:calico-system,Attempt:0,}"
Mar 7 01:10:45.113585 containerd[1890]: time="2026-03-07T01:10:45.113545859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669bdb6b65-nhkt8,Uid:e39111b2-0e10-4e35-8ff3-e8249485a878,Namespace:calico-system,Attempt:0,}"
Mar 7 01:10:45.267691 containerd[1890]: time="2026-03-07T01:10:45.267299316Z" level=info msg="CreateContainer within sandbox \"703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:10:45.337455 containerd[1890]: time="2026-03-07T01:10:45.337115276Z" level=info msg="CreateContainer within sandbox \"703a9abb7370cbc20873e6d1bc4d27b23ac124b0f23de5fced90fdae538079bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3651d5704743824a348ecfd91e5ff1a82a49bee489794f6ec5b07b0b2a57fc26\""
Mar 7 01:10:45.361837 containerd[1890]: time="2026-03-07T01:10:45.361789788Z" level=info msg="StartContainer for \"3651d5704743824a348ecfd91e5ff1a82a49bee489794f6ec5b07b0b2a57fc26\""
Mar 7 01:10:45.424242 systemd[1]: Started cri-containerd-3651d5704743824a348ecfd91e5ff1a82a49bee489794f6ec5b07b0b2a57fc26.scope - libcontainer container 3651d5704743824a348ecfd91e5ff1a82a49bee489794f6ec5b07b0b2a57fc26.
Mar 7 01:10:45.596484 containerd[1890]: time="2026-03-07T01:10:45.595127817Z" level=info msg="StartContainer for \"3651d5704743824a348ecfd91e5ff1a82a49bee489794f6ec5b07b0b2a57fc26\" returns successfully"
Mar 7 01:10:45.781810 containerd[1890]: time="2026-03-07T01:10:45.781667746Z" level=error msg="Failed to destroy network for sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.791341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0-shm.mount: Deactivated successfully.
Mar 7 01:10:45.799927 containerd[1890]: time="2026-03-07T01:10:45.799713137Z" level=error msg="encountered an error cleaning up failed sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.799927 containerd[1890]: time="2026-03-07T01:10:45.799809719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7lsxp,Uid:c3318a07-edd7-4b14-bc1e-8289ed132c3a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.807411 kubelet[3114]: E0307 01:10:45.805936 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.810684 containerd[1890]: time="2026-03-07T01:10:45.810628488Z" level=error msg="Failed to destroy network for sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.817201 containerd[1890]: time="2026-03-07T01:10:45.817147079Z" level=error msg="encountered an error cleaning up failed sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.818632 containerd[1890]: time="2026-03-07T01:10:45.817398430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fffbd9bb8-8fmss,Uid:84013ff8-1caf-4210-8629-c308dc9cf92c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.820335 kubelet[3114]: E0307 01:10:45.819420 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.819974 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a-shm.mount: Deactivated successfully.
Mar 7 01:10:45.822209 kubelet[3114]: E0307 01:10:45.820315 3114 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fffbd9bb8-8fmss"
Mar 7 01:10:45.822209 kubelet[3114]: E0307 01:10:45.822120 3114 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fffbd9bb8-8fmss"
Mar 7 01:10:45.824057 kubelet[3114]: E0307 01:10:45.822203 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fffbd9bb8-8fmss_calico-system(84013ff8-1caf-4210-8629-c308dc9cf92c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fffbd9bb8-8fmss_calico-system(84013ff8-1caf-4210-8629-c308dc9cf92c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fffbd9bb8-8fmss" podUID="84013ff8-1caf-4210-8629-c308dc9cf92c"
Mar 7 01:10:45.824710 containerd[1890]: time="2026-03-07T01:10:45.824674893Z" level=error msg="Failed to destroy network for sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.829861 containerd[1890]: time="2026-03-07T01:10:45.829799194Z" level=error msg="encountered an error cleaning up failed sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.829973 containerd[1890]: time="2026-03-07T01:10:45.829897654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fffbd9bb8-p7j5c,Uid:31e4d6ab-37b1-4c95-b346-1277ec27fa0d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.834053 kubelet[3114]: E0307 01:10:45.831942 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.833497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254-shm.mount: Deactivated successfully.
Mar 7 01:10:45.834230 kubelet[3114]: E0307 01:10:45.832011 3114 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fffbd9bb8-p7j5c"
Mar 7 01:10:45.834230 kubelet[3114]: E0307 01:10:45.834120 3114 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fffbd9bb8-p7j5c"
Mar 7 01:10:45.834230 kubelet[3114]: E0307 01:10:45.834196 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fffbd9bb8-p7j5c_calico-system(31e4d6ab-37b1-4c95-b346-1277ec27fa0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fffbd9bb8-p7j5c_calico-system(31e4d6ab-37b1-4c95-b346-1277ec27fa0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fffbd9bb8-p7j5c" podUID="31e4d6ab-37b1-4c95-b346-1277ec27fa0d"
Mar 7 01:10:45.835820 kubelet[3114]: E0307 01:10:45.819125 3114 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7lsxp"
Mar 7 01:10:45.835820 kubelet[3114]: E0307 01:10:45.834505 3114 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7lsxp"
Mar 7 01:10:45.835820 kubelet[3114]: E0307 01:10:45.834556 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7lsxp_kube-system(c3318a07-edd7-4b14-bc1e-8289ed132c3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7lsxp_kube-system(c3318a07-edd7-4b14-bc1e-8289ed132c3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7lsxp" podUID="c3318a07-edd7-4b14-bc1e-8289ed132c3a"
Mar 7 01:10:45.860037 containerd[1890]: time="2026-03-07T01:10:45.856173262Z" level=error msg="Failed to destroy network for sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.860037 containerd[1890]: time="2026-03-07T01:10:45.858613074Z" level=error msg="encountered an error cleaning up failed sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.860037 containerd[1890]: time="2026-03-07T01:10:45.858873875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rvl56,Uid:c72f5271-c799-4260-ae9b-eddba6e2b3f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.860734 kubelet[3114]: E0307 01:10:45.859203 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.860734 kubelet[3114]: E0307 01:10:45.859263 3114 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rvl56"
Mar 7 01:10:45.860734 kubelet[3114]: E0307 01:10:45.859295 3114 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rvl56"
Mar 7 01:10:45.861153 kubelet[3114]: E0307 01:10:45.859355 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rvl56_kube-system(c72f5271-c799-4260-ae9b-eddba6e2b3f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rvl56_kube-system(c72f5271-c799-4260-ae9b-eddba6e2b3f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rvl56" podUID="c72f5271-c799-4260-ae9b-eddba6e2b3f5"
Mar 7 01:10:45.863318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040-shm.mount: Deactivated successfully.
Mar 7 01:10:45.875374 containerd[1890]: time="2026-03-07T01:10:45.875211415Z" level=error msg="Failed to destroy network for sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.875374 containerd[1890]: time="2026-03-07T01:10:45.875289314Z" level=error msg="Failed to destroy network for sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.876054 containerd[1890]: time="2026-03-07T01:10:45.875780347Z" level=error msg="encountered an error cleaning up failed sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.876054 containerd[1890]: time="2026-03-07T01:10:45.875843772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669bdb6b65-nhkt8,Uid:e39111b2-0e10-4e35-8ff3-e8249485a878,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.876511 containerd[1890]: time="2026-03-07T01:10:45.876477906Z" level=error msg="encountered an error cleaning up failed sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.876611 containerd[1890]: time="2026-03-07T01:10:45.876541549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd6b64fbf-c45tm,Uid:b097a7a7-3ecc-4008-8602-5cdf4dd7e06f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.876785 kubelet[3114]: E0307 01:10:45.876738 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.876856 kubelet[3114]: E0307 01:10:45.876801 3114 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-669bdb6b65-nhkt8"
Mar 7 01:10:45.876914 kubelet[3114]: E0307 01:10:45.876830 3114 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-669bdb6b65-nhkt8"
Mar 7 01:10:45.876965 kubelet[3114]: E0307 01:10:45.876921 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-669bdb6b65-nhkt8_calico-system(e39111b2-0e10-4e35-8ff3-e8249485a878)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-669bdb6b65-nhkt8_calico-system(e39111b2-0e10-4e35-8ff3-e8249485a878)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-669bdb6b65-nhkt8" podUID="e39111b2-0e10-4e35-8ff3-e8249485a878"
Mar 7 01:10:45.877135 kubelet[3114]: E0307 01:10:45.877106 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.877214 kubelet[3114]: E0307 01:10:45.877152 3114 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dd6b64fbf-c45tm"
Mar 7 01:10:45.877365 kubelet[3114]: E0307 01:10:45.877174 3114 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dd6b64fbf-c45tm"
Mar 7 01:10:45.877365 kubelet[3114]: E0307 01:10:45.877333 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dd6b64fbf-c45tm_calico-system(b097a7a7-3ecc-4008-8602-5cdf4dd7e06f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dd6b64fbf-c45tm_calico-system(b097a7a7-3ecc-4008-8602-5cdf4dd7e06f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dd6b64fbf-c45tm" podUID="b097a7a7-3ecc-4008-8602-5cdf4dd7e06f"
Mar 7 01:10:45.877789 containerd[1890]: time="2026-03-07T01:10:45.877650891Z" level=error msg="Failed to destroy network for sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.878451 containerd[1890]: time="2026-03-07T01:10:45.878343143Z" level=error msg="encountered an error cleaning up failed sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.878451 containerd[1890]: time="2026-03-07T01:10:45.878392254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-9kx9l,Uid:ec167290-b451-4cd1-a0d3-0912ddaf0ce2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.878991 kubelet[3114]: E0307 01:10:45.878878 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:45.878991 kubelet[3114]: E0307 01:10:45.878957 3114 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-9kx9l"
Mar 7 01:10:45.879412 kubelet[3114]: E0307 01:10:45.879184 3114 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-9kx9l"
Mar 7 01:10:45.879557 kubelet[3114]: E0307 01:10:45.879368 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-9kx9l_calico-system(ec167290-b451-4cd1-a0d3-0912ddaf0ce2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-9kx9l_calico-system(ec167290-b451-4cd1-a0d3-0912ddaf0ce2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-9kx9l" podUID="ec167290-b451-4cd1-a0d3-0912ddaf0ce2"
Mar 7 01:10:45.992330 systemd[1]: Created slice kubepods-besteffort-podcb0ce17a_98ad_40fb_b4f7_1528d43404aa.slice - libcontainer container kubepods-besteffort-podcb0ce17a_98ad_40fb_b4f7_1528d43404aa.slice.
Mar 7 01:10:45.999414 containerd[1890]: time="2026-03-07T01:10:45.999364468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9sl5j,Uid:cb0ce17a-98ad-40fb-b4f7-1528d43404aa,Namespace:calico-system,Attempt:0,}"
Mar 7 01:10:46.087022 containerd[1890]: time="2026-03-07T01:10:46.086947633Z" level=error msg="Failed to destroy network for sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:46.087543 containerd[1890]: time="2026-03-07T01:10:46.087500713Z" level=error msg="encountered an error cleaning up failed sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:46.087671 containerd[1890]: time="2026-03-07T01:10:46.087577835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9sl5j,Uid:cb0ce17a-98ad-40fb-b4f7-1528d43404aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:46.087918 kubelet[3114]: E0307 01:10:46.087883 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:10:46.088054 kubelet[3114]: E0307 01:10:46.087937 3114 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9sl5j"
Mar 7 01:10:46.088054 kubelet[3114]: E0307 01:10:46.087964 3114 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9sl5j"
Mar 7 01:10:46.088150 kubelet[3114]: E0307 01:10:46.088043 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9sl5j_calico-system(cb0ce17a-98ad-40fb-b4f7-1528d43404aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9sl5j_calico-system(cb0ce17a-98ad-40fb-b4f7-1528d43404aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9sl5j" podUID="cb0ce17a-98ad-40fb-b4f7-1528d43404aa"
Mar 7 01:10:46.253523 kubelet[3114]: I0307 01:10:46.253475 3114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613"
Mar 7 01:10:46.256172 kubelet[3114]: I0307 01:10:46.255736 3114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040"
Mar 7 01:10:46.270046 kubelet[3114]: I0307 01:10:46.269217 3114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902"
Mar 7 01:10:46.273522 kubelet[3114]: I0307 01:10:46.272808 3114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a"
Mar 7 01:10:46.276870 kubelet[3114]: I0307 01:10:46.276161 3114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7"
Mar 7 01:10:46.278561 kubelet[3114]: I0307 01:10:46.278504 3114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254"
Mar 7 01:10:46.282808 kubelet[3114]: I0307 01:10:46.281142 3114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0"
Mar 7 01:10:46.305372 kubelet[3114]: I0307 01:10:46.305026 3114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc"
Mar 7 01:10:46.321148 containerd[1890]: time="2026-03-07T01:10:46.321104352Z" level=info msg="StopPodSandbox for \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\""
Mar 7 01:10:46.327588 containerd[1890]: time="2026-03-07T01:10:46.326956562Z" level=info msg="Ensure that sandbox 9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0 in task-service has been cleanup successfully"
Mar 7 01:10:46.331321 containerd[1890]: time="2026-03-07T01:10:46.330545489Z" level=info msg="StopPodSandbox for \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\""
Mar 7 01:10:46.331608 containerd[1890]: time="2026-03-07T01:10:46.331582623Z" level=info msg="StopPodSandbox for \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\""
Mar 7 01:10:46.331958 containerd[1890]: time="2026-03-07T01:10:46.331933102Z" level=info msg="Ensure that sandbox 99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc in task-service has been cleanup successfully"
Mar 7 01:10:46.334704 containerd[1890]: time="2026-03-07T01:10:46.334669763Z" level=info msg="Ensure that sandbox 7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7 in task-service has been cleanup successfully"
Mar 7 01:10:46.340705 kubelet[3114]: I0307 01:10:46.340582 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6sqr7" podStartSLOduration=4.509258881 podStartE2EDuration="25.324826465s" podCreationTimestamp="2026-03-07 01:10:21 +0000 UTC" firstStartedPulling="2026-03-07 01:10:22.468553257 +0000 UTC m=+23.650524426" lastFinishedPulling="2026-03-07 01:10:43.284120848 +0000 UTC m=+44.466092010" observedRunningTime="2026-03-07 01:10:46.324345007 +0000 UTC m=+47.506316188" watchObservedRunningTime="2026-03-07 01:10:46.324826465 +0000 UTC m=+47.506797644"
Mar 7 01:10:46.355627 containerd[1890]: time="2026-03-07T01:10:46.355573786Z" level=info msg="StopPodSandbox for \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\""
Mar 7 01:10:46.356851 containerd[1890]: time="2026-03-07T01:10:46.356811355Z" level=info msg="StopPodSandbox for \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\""
Mar 7 01:10:46.357876 containerd[1890]: time="2026-03-07T01:10:46.357807652Z" level=info msg="Ensure that sandbox 310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613 in task-service has been cleanup successfully"
Mar 7 01:10:46.358623 containerd[1890]: time="2026-03-07T01:10:46.358529521Z" level=info msg="StopPodSandbox for \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\""
Mar 7 01:10:46.362586 containerd[1890]: time="2026-03-07T01:10:46.361460517Z" level=info msg="Ensure that sandbox ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254 in task-service has been cleanup successfully"
Mar 7 01:10:46.381496 containerd[1890]: time="2026-03-07T01:10:46.381451444Z" level=info msg="StopPodSandbox for \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\""
Mar 7 01:10:46.381709 containerd[1890]: time="2026-03-07T01:10:46.381667384Z" level=info msg="Ensure that sandbox 661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040 in task-service has been cleanup successfully"
Mar 7 01:10:46.382030 containerd[1890]: time="2026-03-07T01:10:46.381974468Z" level=info msg="Ensure that sandbox ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902 in task-service has been cleanup successfully"
Mar 7 01:10:46.385054 containerd[1890]: time="2026-03-07T01:10:46.383639340Z" level=info msg="StopPodSandbox for \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\""
Mar 7 01:10:46.385054 containerd[1890]: time="2026-03-07T01:10:46.383968420Z" level=info msg="Ensure that sandbox 5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a in task-service has been cleanup successfully"
Mar 7 01:10:46.505825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc-shm.mount: Deactivated successfully.
Mar 7 01:10:46.507397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7-shm.mount: Deactivated successfully.
Mar 7 01:10:46.507493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902-shm.mount: Deactivated successfully.
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:46.807 [INFO][4356] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc"
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:46.810 [INFO][4356] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" iface="eth0" netns="/var/run/netns/cni-7332af84-d109-40ef-0408-3dc657dbdc71"
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:46.812 [INFO][4356] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" iface="eth0" netns="/var/run/netns/cni-7332af84-d109-40ef-0408-3dc657dbdc71"
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:46.815 [INFO][4356] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" iface="eth0" netns="/var/run/netns/cni-7332af84-d109-40ef-0408-3dc657dbdc71"
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:46.817 [INFO][4356] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc"
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:46.819 [INFO][4356] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc"
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:47.191 [INFO][4438] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" HandleID="k8s-pod-network.99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:47.191 [INFO][4438] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:47.191 [INFO][4438] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:47.204 [WARNING][4438] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" HandleID="k8s-pod-network.99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:47.204 [INFO][4438] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" HandleID="k8s-pod-network.99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:47.209 [INFO][4438] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:10:47.216801 containerd[1890]: 2026-03-07 01:10:47.211 [INFO][4356] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc"
Mar 7 01:10:47.219405 containerd[1890]: time="2026-03-07T01:10:47.218267138Z" level=info msg="TearDown network for sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\" successfully"
Mar 7 01:10:47.219405 containerd[1890]: time="2026-03-07T01:10:47.218331325Z" level=info msg="StopPodSandbox for \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\" returns successfully"
Mar 7 01:10:47.224886 systemd[1]: run-netns-cni\x2d7332af84\x2dd109\x2d40ef\x2d0408\x2d3dc657dbdc71.mount: Deactivated successfully.
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:46.805 [INFO][4373] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254"
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:46.806 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" iface="eth0" netns="/var/run/netns/cni-24e41a60-e3af-5223-9c36-904f12720699"
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:46.811 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" iface="eth0" netns="/var/run/netns/cni-24e41a60-e3af-5223-9c36-904f12720699"
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:46.811 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" iface="eth0" netns="/var/run/netns/cni-24e41a60-e3af-5223-9c36-904f12720699"
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:46.814 [INFO][4373] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254"
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:46.817 [INFO][4373] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254"
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:47.191 [INFO][4439] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" HandleID="k8s-pod-network.ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0"
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:47.192 [INFO][4439] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:47.206 [INFO][4439] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:47.215 [WARNING][4439] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" HandleID="k8s-pod-network.ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0"
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:47.215 [INFO][4439] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" HandleID="k8s-pod-network.ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0"
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:47.218 [INFO][4439] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:10:47.231633 containerd[1890]: 2026-03-07 01:10:47.223 [INFO][4373] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254"
Mar 7 01:10:47.231633 containerd[1890]: time="2026-03-07T01:10:47.230570704Z" level=info msg="TearDown network for sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\" successfully"
Mar 7 01:10:47.231633 containerd[1890]: time="2026-03-07T01:10:47.230602415Z" level=info msg="StopPodSandbox for \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\" returns successfully"
Mar 7 01:10:47.240864 systemd[1]: run-netns-cni\x2d24e41a60\x2de3af\x2d5223\x2d9c36\x2d904f12720699.mount: Deactivated successfully.
Mar 7 01:10:47.252139 containerd[1890]: time="2026-03-07T01:10:47.252071441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fffbd9bb8-p7j5c,Uid:31e4d6ab-37b1-4c95-b346-1277ec27fa0d,Namespace:calico-system,Attempt:1,}"
Mar 7 01:10:47.255856 containerd[1890]: time="2026-03-07T01:10:47.255723146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669bdb6b65-nhkt8,Uid:e39111b2-0e10-4e35-8ff3-e8249485a878,Namespace:calico-system,Attempt:1,}"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:46.802 [INFO][4387] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:46.802 [INFO][4387] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" iface="eth0" netns="/var/run/netns/cni-f2ff8c9e-020e-6f37-0d54-cd2500d532cf"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:46.802 [INFO][4387] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" iface="eth0" netns="/var/run/netns/cni-f2ff8c9e-020e-6f37-0d54-cd2500d532cf"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:46.804 [INFO][4387] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" iface="eth0" netns="/var/run/netns/cni-f2ff8c9e-020e-6f37-0d54-cd2500d532cf"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:46.804 [INFO][4387] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:46.804 [INFO][4387] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:47.192 [INFO][4432] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" HandleID="k8s-pod-network.5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:47.192 [INFO][4432] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:47.219 [INFO][4432] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:47.247 [WARNING][4432] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" HandleID="k8s-pod-network.5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:47.249 [INFO][4432] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" HandleID="k8s-pod-network.5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0"
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:47.253 [INFO][4432] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:10:47.264842 containerd[1890]: 2026-03-07 01:10:47.260 [INFO][4387] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a"
Mar 7 01:10:47.267556 containerd[1890]: time="2026-03-07T01:10:47.265060128Z" level=info msg="TearDown network for sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\" successfully"
Mar 7 01:10:47.267556 containerd[1890]: time="2026-03-07T01:10:47.265090460Z" level=info msg="StopPodSandbox for \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\" returns successfully"
Mar 7 01:10:47.271385 systemd[1]: run-netns-cni\x2df2ff8c9e\x2d020e\x2d6f37\x2d0d54\x2dcd2500d532cf.mount: Deactivated successfully.
Mar 7 01:10:47.286958 containerd[1890]: time="2026-03-07T01:10:47.286679113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fffbd9bb8-8fmss,Uid:84013ff8-1caf-4210-8629-c308dc9cf92c,Namespace:calico-system,Attempt:1,}"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:46.820 [INFO][4351] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:46.821 [INFO][4351] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" iface="eth0" netns="/var/run/netns/cni-588e089f-ba78-de16-653a-079e2979a8d8"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:46.822 [INFO][4351] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" iface="eth0" netns="/var/run/netns/cni-588e089f-ba78-de16-653a-079e2979a8d8"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:46.822 [INFO][4351] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" iface="eth0" netns="/var/run/netns/cni-588e089f-ba78-de16-653a-079e2979a8d8"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:46.822 [INFO][4351] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:46.822 [INFO][4351] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:47.192 [INFO][4441] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" HandleID="k8s-pod-network.9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:47.192 [INFO][4441] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:47.254 [INFO][4441] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:47.270 [WARNING][4441] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" HandleID="k8s-pod-network.9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:47.270 [INFO][4441] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" HandleID="k8s-pod-network.9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0"
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:47.276 [INFO][4441] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:10:47.288469 containerd[1890]: 2026-03-07 01:10:47.282 [INFO][4351] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0"
Mar 7 01:10:47.291711 containerd[1890]: time="2026-03-07T01:10:47.291253221Z" level=info msg="TearDown network for sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\" successfully"
Mar 7 01:10:47.291711 containerd[1890]: time="2026-03-07T01:10:47.291283435Z" level=info msg="StopPodSandbox for \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\" returns successfully"
Mar 7 01:10:47.299130 containerd[1890]: time="2026-03-07T01:10:47.299090306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7lsxp,Uid:c3318a07-edd7-4b14-bc1e-8289ed132c3a,Namespace:kube-system,Attempt:1,}"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:46.805 [INFO][4397] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:46.806 [INFO][4397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" iface="eth0" netns="/var/run/netns/cni-a4c4a4af-4a4b-21bc-2abf-a8e85d042a74"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:46.808 [INFO][4397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" iface="eth0" netns="/var/run/netns/cni-a4c4a4af-4a4b-21bc-2abf-a8e85d042a74"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:46.808 [INFO][4397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" iface="eth0" netns="/var/run/netns/cni-a4c4a4af-4a4b-21bc-2abf-a8e85d042a74"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:46.808 [INFO][4397] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:46.808 [INFO][4397] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:47.190 [INFO][4435] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" HandleID="k8s-pod-network.ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:47.191 [INFO][4435] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:47.276 [INFO][4435] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:47.292 [WARNING][4435] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" HandleID="k8s-pod-network.ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:47.292 [INFO][4435] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" HandleID="k8s-pod-network.ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0"
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:47.299 [INFO][4435] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:10:47.309507 containerd[1890]: 2026-03-07 01:10:47.302 [INFO][4397] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902"
Mar 7 01:10:47.310833 containerd[1890]: time="2026-03-07T01:10:47.310618349Z" level=info msg="TearDown network for sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\" successfully"
Mar 7 01:10:47.310833 containerd[1890]: time="2026-03-07T01:10:47.310649823Z" level=info msg="StopPodSandbox for \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\" returns successfully"
Mar 7 01:10:47.317742 kubelet[3114]: I0307 01:10:47.317686 3114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:10:47.323261 containerd[1890]: time="2026-03-07T01:10:47.323185503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd6b64fbf-c45tm,Uid:b097a7a7-3ecc-4008-8602-5cdf4dd7e06f,Namespace:calico-system,Attempt:1,}"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:46.810 [INFO][4364] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:46.811 [INFO][4364] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" iface="eth0" netns="/var/run/netns/cni-a5c6d32f-4423-61ee-18fa-7df6c280e379"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:46.812 [INFO][4364] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" iface="eth0" netns="/var/run/netns/cni-a5c6d32f-4423-61ee-18fa-7df6c280e379"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:46.813 [INFO][4364] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" iface="eth0" netns="/var/run/netns/cni-a5c6d32f-4423-61ee-18fa-7df6c280e379"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:46.813 [INFO][4364] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:46.813 [INFO][4364] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:47.191 [INFO][4434] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" HandleID="k8s-pod-network.310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:47.192 [INFO][4434] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:47.299 [INFO][4434] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:47.318 [WARNING][4434] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" HandleID="k8s-pod-network.310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:47.319 [INFO][4434] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" HandleID="k8s-pod-network.310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0"
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:47.328 [INFO][4434] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:10:47.351364 containerd[1890]: 2026-03-07 01:10:47.336 [INFO][4364] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613"
Mar 7 01:10:47.352777 containerd[1890]: time="2026-03-07T01:10:47.352224468Z" level=info msg="TearDown network for sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\" successfully"
Mar 7 01:10:47.353130 containerd[1890]: time="2026-03-07T01:10:47.352255132Z" level=info msg="StopPodSandbox for \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\" returns successfully"
Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:46.794 [INFO][4362] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7"
Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:46.795 [INFO][4362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns.
ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" iface="eth0" netns="/var/run/netns/cni-80edd948-a50d-c0c0-7f4c-11ab16edeaf7" Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:46.796 [INFO][4362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" iface="eth0" netns="/var/run/netns/cni-80edd948-a50d-c0c0-7f4c-11ab16edeaf7" Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:46.797 [INFO][4362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" iface="eth0" netns="/var/run/netns/cni-80edd948-a50d-c0c0-7f4c-11ab16edeaf7" Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:46.798 [INFO][4362] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:46.798 [INFO][4362] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:47.193 [INFO][4428] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" HandleID="k8s-pod-network.7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:47.193 [INFO][4428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:47.327 [INFO][4428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:47.335 [WARNING][4428] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" HandleID="k8s-pod-network.7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:47.335 [INFO][4428] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" HandleID="k8s-pod-network.7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:47.337 [INFO][4428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:47.358857 containerd[1890]: 2026-03-07 01:10:47.348 [INFO][4362] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:10:47.360706 containerd[1890]: time="2026-03-07T01:10:47.360142385Z" level=info msg="TearDown network for sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\" successfully" Mar 7 01:10:47.362026 containerd[1890]: time="2026-03-07T01:10:47.361320948Z" level=info msg="StopPodSandbox for \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\" returns successfully" Mar 7 01:10:47.376177 containerd[1890]: time="2026-03-07T01:10:47.376132540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-9kx9l,Uid:ec167290-b451-4cd1-a0d3-0912ddaf0ce2,Namespace:calico-system,Attempt:1,}" Mar 7 01:10:47.377296 containerd[1890]: time="2026-03-07T01:10:47.377264104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9sl5j,Uid:cb0ce17a-98ad-40fb-b4f7-1528d43404aa,Namespace:calico-system,Attempt:1,}" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:46.798 [INFO][4374] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:46.799 [INFO][4374] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" iface="eth0" netns="/var/run/netns/cni-27e58bdc-f1b9-92d1-52e3-cd0807ea7611" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:46.800 [INFO][4374] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" iface="eth0" netns="/var/run/netns/cni-27e58bdc-f1b9-92d1-52e3-cd0807ea7611" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:46.803 [INFO][4374] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" iface="eth0" netns="/var/run/netns/cni-27e58bdc-f1b9-92d1-52e3-cd0807ea7611" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:46.803 [INFO][4374] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:46.803 [INFO][4374] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:47.194 [INFO][4429] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" HandleID="k8s-pod-network.661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:47.194 [INFO][4429] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:47.339 [INFO][4429] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:47.353 [WARNING][4429] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" HandleID="k8s-pod-network.661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:47.353 [INFO][4429] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" HandleID="k8s-pod-network.661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:47.359 [INFO][4429] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:47.378633 containerd[1890]: 2026-03-07 01:10:47.374 [INFO][4374] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:10:47.379903 containerd[1890]: time="2026-03-07T01:10:47.379210204Z" level=info msg="TearDown network for sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\" successfully" Mar 7 01:10:47.379903 containerd[1890]: time="2026-03-07T01:10:47.379237802Z" level=info msg="StopPodSandbox for \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\" returns successfully" Mar 7 01:10:47.394047 containerd[1890]: time="2026-03-07T01:10:47.392869048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rvl56,Uid:c72f5271-c799-4260-ae9b-eddba6e2b3f5,Namespace:kube-system,Attempt:1,}" Mar 7 01:10:47.528595 systemd[1]: run-netns-cni\x2da5c6d32f\x2d4423\x2d61ee\x2d18fa\x2d7df6c280e379.mount: Deactivated successfully. Mar 7 01:10:47.529066 systemd[1]: run-netns-cni\x2d80edd948\x2da50d\x2dc0c0\x2d7f4c\x2d11ab16edeaf7.mount: Deactivated successfully. Mar 7 01:10:47.529152 systemd[1]: run-netns-cni\x2d588e089f\x2dba78\x2dde16\x2d653a\x2d079e2979a8d8.mount: Deactivated successfully. Mar 7 01:10:47.529234 systemd[1]: run-netns-cni\x2da4c4a4af\x2d4a4b\x2d21bc\x2d2abf\x2da8e85d042a74.mount: Deactivated successfully. Mar 7 01:10:47.529316 systemd[1]: run-netns-cni\x2d27e58bdc\x2df1b9\x2d92d1\x2d52e3\x2dcd0807ea7611.mount: Deactivated successfully. Mar 7 01:10:47.885793 systemd-networkd[1818]: calieecb1753e85: Link UP Mar 7 01:10:47.887284 systemd-networkd[1818]: calieecb1753e85: Gained carrier Mar 7 01:10:47.898946 (udev-worker)[4642]: Network interface NamePolicy= disabled on kernel command line. 
Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.478 [ERROR][4506] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.562 [INFO][4506] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0 calico-apiserver-5fffbd9bb8- calico-system 31e4d6ab-37b1-4c95-b346-1277ec27fa0d 896 0 2026-03-07 01:10:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fffbd9bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-156 calico-apiserver-5fffbd9bb8-p7j5c eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calieecb1753e85 [] [] }} ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-p7j5c" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.562 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-p7j5c" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.717 [INFO][4586] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" HandleID="k8s-pod-network.63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" 
Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.769 [INFO][4586] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" HandleID="k8s-pod-network.63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e940), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-156", "pod":"calico-apiserver-5fffbd9bb8-p7j5c", "timestamp":"2026-03-07 01:10:47.717865356 +0000 UTC"}, Hostname:"ip-172-31-29-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00040bb80)} Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.769 [INFO][4586] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.769 [INFO][4586] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.769 [INFO][4586] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-156' Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.777 [INFO][4586] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" host="ip-172-31-29-156" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.792 [INFO][4586] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-156" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.804 [INFO][4586] ipam/ipam.go 526: Trying affinity for 192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.806 [INFO][4586] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.811 [INFO][4586] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.811 [INFO][4586] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" host="ip-172-31-29-156" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.814 [INFO][4586] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3 Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.827 [INFO][4586] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" host="ip-172-31-29-156" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.835 [INFO][4586] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.1/26] block=192.168.2.0/26 
handle="k8s-pod-network.63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" host="ip-172-31-29-156" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.835 [INFO][4586] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.1/26] handle="k8s-pod-network.63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" host="ip-172-31-29-156" Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.836 [INFO][4586] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:47.952540 containerd[1890]: 2026-03-07 01:10:47.836 [INFO][4586] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.1/26] IPv6=[] ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" HandleID="k8s-pod-network.63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:10:47.954436 containerd[1890]: 2026-03-07 01:10:47.844 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-p7j5c" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0", GenerateName:"calico-apiserver-5fffbd9bb8-", Namespace:"calico-system", SelfLink:"", UID:"31e4d6ab-37b1-4c95-b346-1277ec27fa0d", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fffbd9bb8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"", Pod:"calico-apiserver-5fffbd9bb8-p7j5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calieecb1753e85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:47.954436 containerd[1890]: 2026-03-07 01:10:47.845 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.1/32] ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-p7j5c" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:10:47.954436 containerd[1890]: 2026-03-07 01:10:47.845 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieecb1753e85 ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-p7j5c" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:10:47.954436 containerd[1890]: 2026-03-07 01:10:47.869 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-p7j5c" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:10:47.954436 containerd[1890]: 2026-03-07 01:10:47.869 [INFO][4506] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-p7j5c" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0", GenerateName:"calico-apiserver-5fffbd9bb8-", Namespace:"calico-system", SelfLink:"", UID:"31e4d6ab-37b1-4c95-b346-1277ec27fa0d", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fffbd9bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3", Pod:"calico-apiserver-5fffbd9bb8-p7j5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calieecb1753e85", MAC:"e2:16:90:c1:a6:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:47.954436 containerd[1890]: 2026-03-07 01:10:47.896 [INFO][4506] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-p7j5c" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:10:48.075493 (udev-worker)[4641]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:10:48.105349 systemd-networkd[1818]: cali51d0eef42d2: Link UP Mar 7 01:10:48.106588 systemd-networkd[1818]: cali51d0eef42d2: Gained carrier Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.603 [ERROR][4516] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.638 [INFO][4516] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0 calico-apiserver-5fffbd9bb8- calico-system 84013ff8-1caf-4210-8629-c308dc9cf92c 893 0 2026-03-07 01:10:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fffbd9bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-156 calico-apiserver-5fffbd9bb8-8fmss eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali51d0eef42d2 [] [] }} ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-8fmss" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.638 [INFO][4516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-8fmss" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.807 [INFO][4600] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" HandleID="k8s-pod-network.10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.836 [INFO][4600] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" HandleID="k8s-pod-network.10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f840), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-156", "pod":"calico-apiserver-5fffbd9bb8-8fmss", "timestamp":"2026-03-07 01:10:47.807122381 +0000 UTC"}, Hostname:"ip-172-31-29-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004f2000)} Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.836 [INFO][4600] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.837 [INFO][4600] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.837 [INFO][4600] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-156' Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.873 [INFO][4600] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" host="ip-172-31-29-156" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.893 [INFO][4600] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-156" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.939 [INFO][4600] ipam/ipam.go 526: Trying affinity for 192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.964 [INFO][4600] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.976 [INFO][4600] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.976 [INFO][4600] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" host="ip-172-31-29-156" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:47.982 [INFO][4600] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20 Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:48.008 [INFO][4600] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" host="ip-172-31-29-156" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:48.053 [INFO][4600] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.2/26] block=192.168.2.0/26 
handle="k8s-pod-network.10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" host="ip-172-31-29-156" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:48.054 [INFO][4600] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.2/26] handle="k8s-pod-network.10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" host="ip-172-31-29-156" Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:48.056 [INFO][4600] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:48.167257 containerd[1890]: 2026-03-07 01:10:48.056 [INFO][4600] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.2/26] IPv6=[] ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" HandleID="k8s-pod-network.10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:10:48.171420 containerd[1890]: 2026-03-07 01:10:48.067 [INFO][4516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-8fmss" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0", GenerateName:"calico-apiserver-5fffbd9bb8-", Namespace:"calico-system", SelfLink:"", UID:"84013ff8-1caf-4210-8629-c308dc9cf92c", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fffbd9bb8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"", Pod:"calico-apiserver-5fffbd9bb8-8fmss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali51d0eef42d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:48.171420 containerd[1890]: 2026-03-07 01:10:48.071 [INFO][4516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.2/32] ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-8fmss" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:10:48.171420 containerd[1890]: 2026-03-07 01:10:48.071 [INFO][4516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51d0eef42d2 ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-8fmss" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:10:48.171420 containerd[1890]: 2026-03-07 01:10:48.109 [INFO][4516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-8fmss" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:10:48.171420 containerd[1890]: 2026-03-07 01:10:48.113 [INFO][4516] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-8fmss" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0", GenerateName:"calico-apiserver-5fffbd9bb8-", Namespace:"calico-system", SelfLink:"", UID:"84013ff8-1caf-4210-8629-c308dc9cf92c", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fffbd9bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20", Pod:"calico-apiserver-5fffbd9bb8-8fmss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali51d0eef42d2", MAC:"7e:9a:d9:97:c6:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:48.171420 containerd[1890]: 2026-03-07 01:10:48.137 [INFO][4516] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20" Namespace="calico-system" Pod="calico-apiserver-5fffbd9bb8-8fmss" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:10:48.186652 containerd[1890]: time="2026-03-07T01:10:48.186349816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:48.186652 containerd[1890]: time="2026-03-07T01:10:48.186426050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:48.186652 containerd[1890]: time="2026-03-07T01:10:48.186444840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:48.186652 containerd[1890]: time="2026-03-07T01:10:48.186559308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:48.275312 systemd[1]: Started cri-containerd-63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3.scope - libcontainer container 63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3. Mar 7 01:10:48.284232 containerd[1890]: time="2026-03-07T01:10:48.283268940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:48.284232 containerd[1890]: time="2026-03-07T01:10:48.283350694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:48.284983 containerd[1890]: time="2026-03-07T01:10:48.283373261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:48.285112 containerd[1890]: time="2026-03-07T01:10:48.284932454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:48.325346 systemd-networkd[1818]: cali99b024c8fdc: Link UP Mar 7 01:10:48.342588 systemd-networkd[1818]: cali99b024c8fdc: Gained carrier Mar 7 01:10:48.388252 systemd[1]: Started cri-containerd-10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20.scope - libcontainer container 10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20. Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:47.466 [ERROR][4495] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:47.524 [INFO][4495] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0 whisker-669bdb6b65- calico-system e39111b2-0e10-4e35-8ff3-e8249485a878 898 0 2026-03-07 01:10:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:669bdb6b65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-156 whisker-669bdb6b65-nhkt8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali99b024c8fdc [] [] }} ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Namespace="calico-system" Pod="whisker-669bdb6b65-nhkt8" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:47.524 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Namespace="calico-system" Pod="whisker-669bdb6b65-nhkt8" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:47.807 [INFO][4584] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:47.846 [INFO][4584] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-156", "pod":"whisker-669bdb6b65-nhkt8", "timestamp":"2026-03-07 01:10:47.807400691 +0000 UTC"}, Hostname:"ip-172-31-29-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001894a0)} Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:47.846 [INFO][4584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.062 [INFO][4584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.063 [INFO][4584] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-156' Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.069 [INFO][4584] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" host="ip-172-31-29-156" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.108 [INFO][4584] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-156" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.137 [INFO][4584] ipam/ipam.go 526: Trying affinity for 192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.160 [INFO][4584] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.189 [INFO][4584] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.189 [INFO][4584] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" host="ip-172-31-29-156" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.193 [INFO][4584] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10 Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.259 [INFO][4584] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" host="ip-172-31-29-156" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.307 [INFO][4584] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.3/26] block=192.168.2.0/26 
handle="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" host="ip-172-31-29-156" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.307 [INFO][4584] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.3/26] handle="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" host="ip-172-31-29-156" Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.307 [INFO][4584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:48.445277 containerd[1890]: 2026-03-07 01:10:48.307 [INFO][4584] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.3/26] IPv6=[] ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:10:48.447509 containerd[1890]: 2026-03-07 01:10:48.315 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Namespace="calico-system" Pod="whisker-669bdb6b65-nhkt8" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0", GenerateName:"whisker-669bdb6b65-", Namespace:"calico-system", SelfLink:"", UID:"e39111b2-0e10-4e35-8ff3-e8249485a878", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669bdb6b65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"", Pod:"whisker-669bdb6b65-nhkt8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali99b024c8fdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:48.447509 containerd[1890]: 2026-03-07 01:10:48.315 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.3/32] ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Namespace="calico-system" Pod="whisker-669bdb6b65-nhkt8" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:10:48.447509 containerd[1890]: 2026-03-07 01:10:48.315 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99b024c8fdc ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Namespace="calico-system" Pod="whisker-669bdb6b65-nhkt8" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:10:48.447509 containerd[1890]: 2026-03-07 01:10:48.378 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Namespace="calico-system" Pod="whisker-669bdb6b65-nhkt8" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:10:48.447509 containerd[1890]: 2026-03-07 01:10:48.383 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Namespace="calico-system" 
Pod="whisker-669bdb6b65-nhkt8" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0", GenerateName:"whisker-669bdb6b65-", Namespace:"calico-system", SelfLink:"", UID:"e39111b2-0e10-4e35-8ff3-e8249485a878", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669bdb6b65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10", Pod:"whisker-669bdb6b65-nhkt8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali99b024c8fdc", MAC:"c6:56:23:0d:c4:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:48.447509 containerd[1890]: 2026-03-07 01:10:48.438 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Namespace="calico-system" Pod="whisker-669bdb6b65-nhkt8" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:10:48.481456 containerd[1890]: 
time="2026-03-07T01:10:48.481333879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:48.482529 containerd[1890]: time="2026-03-07T01:10:48.482364427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:48.482529 containerd[1890]: time="2026-03-07T01:10:48.482493104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:48.482894 containerd[1890]: time="2026-03-07T01:10:48.482698924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:48.525240 systemd[1]: Started cri-containerd-5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10.scope - libcontainer container 5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10. 
Mar 7 01:10:48.601407 systemd-networkd[1818]: calibc36f3d5211: Link UP Mar 7 01:10:48.602741 systemd-networkd[1818]: calibc36f3d5211: Gained carrier Mar 7 01:10:48.666666 containerd[1890]: time="2026-03-07T01:10:48.666573207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fffbd9bb8-p7j5c,Uid:31e4d6ab-37b1-4c95-b346-1277ec27fa0d,Namespace:calico-system,Attempt:1,} returns sandbox id \"63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3\"" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:47.606 [ERROR][4525] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:47.649 [INFO][4525] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0 coredns-66bc5c9577- kube-system c3318a07-edd7-4b14-bc1e-8289ed132c3a 899 0 2026-03-07 01:10:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-156 coredns-66bc5c9577-7lsxp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibc36f3d5211 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Namespace="kube-system" Pod="coredns-66bc5c9577-7lsxp" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:47.649 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Namespace="kube-system" Pod="coredns-66bc5c9577-7lsxp" 
WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:47.970 [INFO][4605] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" HandleID="k8s-pod-network.7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.002 [INFO][4605] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" HandleID="k8s-pod-network.7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cc460), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-156", "pod":"coredns-66bc5c9577-7lsxp", "timestamp":"2026-03-07 01:10:47.970506456 +0000 UTC"}, Hostname:"ip-172-31-29-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000b3600)} Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.002 [INFO][4605] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.308 [INFO][4605] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.308 [INFO][4605] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-156' Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.364 [INFO][4605] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" host="ip-172-31-29-156" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.410 [INFO][4605] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-156" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.440 [INFO][4605] ipam/ipam.go 526: Trying affinity for 192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.470 [INFO][4605] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.480 [INFO][4605] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.481 [INFO][4605] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" host="ip-172-31-29-156" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.488 [INFO][4605] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12 Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.548 [INFO][4605] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" host="ip-172-31-29-156" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.564 [INFO][4605] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.4/26] block=192.168.2.0/26 
handle="k8s-pod-network.7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" host="ip-172-31-29-156" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.564 [INFO][4605] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.4/26] handle="k8s-pod-network.7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" host="ip-172-31-29-156" Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.566 [INFO][4605] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:48.688020 containerd[1890]: 2026-03-07 01:10:48.568 [INFO][4605] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.4/26] IPv6=[] ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" HandleID="k8s-pod-network.7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:10:48.688989 containerd[1890]: 2026-03-07 01:10:48.591 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Namespace="kube-system" Pod="coredns-66bc5c9577-7lsxp" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c3318a07-edd7-4b14-bc1e-8289ed132c3a", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"", Pod:"coredns-66bc5c9577-7lsxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc36f3d5211", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:48.688989 containerd[1890]: 2026-03-07 01:10:48.591 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.4/32] ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Namespace="kube-system" Pod="coredns-66bc5c9577-7lsxp" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:10:48.688989 containerd[1890]: 2026-03-07 01:10:48.591 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc36f3d5211 ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Namespace="kube-system" Pod="coredns-66bc5c9577-7lsxp" 
WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:10:48.688989 containerd[1890]: 2026-03-07 01:10:48.608 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Namespace="kube-system" Pod="coredns-66bc5c9577-7lsxp" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:10:48.688989 containerd[1890]: 2026-03-07 01:10:48.618 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Namespace="kube-system" Pod="coredns-66bc5c9577-7lsxp" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c3318a07-edd7-4b14-bc1e-8289ed132c3a", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12", Pod:"coredns-66bc5c9577-7lsxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc36f3d5211", MAC:"d6:00:e4:7b:55:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:48.688989 containerd[1890]: 2026-03-07 01:10:48.680 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12" Namespace="kube-system" Pod="coredns-66bc5c9577-7lsxp" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:10:48.702603 containerd[1890]: time="2026-03-07T01:10:48.699066110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:10:48.710032 containerd[1890]: time="2026-03-07T01:10:48.709976906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fffbd9bb8-8fmss,Uid:84013ff8-1caf-4210-8629-c308dc9cf92c,Namespace:calico-system,Attempt:1,} returns sandbox id \"10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20\"" Mar 7 01:10:48.737526 systemd-networkd[1818]: cali7cebaccf7e2: Link UP Mar 7 01:10:48.740157 systemd-networkd[1818]: cali7cebaccf7e2: Gained carrier Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 
01:10:47.721 [ERROR][4560] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:47.780 [INFO][4560] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0 goldmane-cccfbd5cf- calico-system ec167290-b451-4cd1-a0d3-0912ddaf0ce2 892 0 2026-03-07 01:10:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-156 goldmane-cccfbd5cf-9kx9l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7cebaccf7e2 [] [] }} ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Namespace="calico-system" Pod="goldmane-cccfbd5cf-9kx9l" WorkloadEndpoint="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:47.780 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Namespace="calico-system" Pod="goldmane-cccfbd5cf-9kx9l" WorkloadEndpoint="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:47.997 [INFO][4626] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" HandleID="k8s-pod-network.867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.036 [INFO][4626] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" HandleID="k8s-pod-network.867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000363b60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-156", "pod":"goldmane-cccfbd5cf-9kx9l", "timestamp":"2026-03-07 01:10:47.997921849 +0000 UTC"}, Hostname:"ip-172-31-29-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ddb80)} Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.044 [INFO][4626] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.566 [INFO][4626] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.566 [INFO][4626] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-156' Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.581 [INFO][4626] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" host="ip-172-31-29-156" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.624 [INFO][4626] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-156" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.661 [INFO][4626] ipam/ipam.go 526: Trying affinity for 192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.670 [INFO][4626] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.679 [INFO][4626] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.679 [INFO][4626] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" host="ip-172-31-29-156" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.688 [INFO][4626] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274 Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.698 [INFO][4626] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" host="ip-172-31-29-156" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.713 [INFO][4626] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.5/26] block=192.168.2.0/26 
handle="k8s-pod-network.867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" host="ip-172-31-29-156" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.713 [INFO][4626] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.5/26] handle="k8s-pod-network.867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" host="ip-172-31-29-156" Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.713 [INFO][4626] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:48.813411 containerd[1890]: 2026-03-07 01:10:48.713 [INFO][4626] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.5/26] IPv6=[] ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" HandleID="k8s-pod-network.867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:48.814553 containerd[1890]: 2026-03-07 01:10:48.720 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Namespace="calico-system" Pod="goldmane-cccfbd5cf-9kx9l" WorkloadEndpoint="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ec167290-b451-4cd1-a0d3-0912ddaf0ce2", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"", Pod:"goldmane-cccfbd5cf-9kx9l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cebaccf7e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:48.814553 containerd[1890]: 2026-03-07 01:10:48.721 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.5/32] ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Namespace="calico-system" Pod="goldmane-cccfbd5cf-9kx9l" WorkloadEndpoint="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:48.814553 containerd[1890]: 2026-03-07 01:10:48.723 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cebaccf7e2 ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Namespace="calico-system" Pod="goldmane-cccfbd5cf-9kx9l" WorkloadEndpoint="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:48.814553 containerd[1890]: 2026-03-07 01:10:48.740 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Namespace="calico-system" Pod="goldmane-cccfbd5cf-9kx9l" WorkloadEndpoint="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:48.814553 containerd[1890]: 2026-03-07 01:10:48.747 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Namespace="calico-system" 
Pod="goldmane-cccfbd5cf-9kx9l" WorkloadEndpoint="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ec167290-b451-4cd1-a0d3-0912ddaf0ce2", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274", Pod:"goldmane-cccfbd5cf-9kx9l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cebaccf7e2", MAC:"8a:e3:42:82:4e:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:48.814553 containerd[1890]: 2026-03-07 01:10:48.800 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274" Namespace="calico-system" Pod="goldmane-cccfbd5cf-9kx9l" WorkloadEndpoint="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:10:48.879243 containerd[1890]: 
time="2026-03-07T01:10:48.876539271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:48.879243 containerd[1890]: time="2026-03-07T01:10:48.876618665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:48.879243 containerd[1890]: time="2026-03-07T01:10:48.876640185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:48.879243 containerd[1890]: time="2026-03-07T01:10:48.876758669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:48.935104 systemd-networkd[1818]: calib0c6c661f52: Link UP Mar 7 01:10:48.944176 systemd-networkd[1818]: calib0c6c661f52: Gained carrier Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:47.653 [ERROR][4528] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:47.717 [INFO][4528] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0 calico-kube-controllers-dd6b64fbf- calico-system b097a7a7-3ecc-4008-8602-5cdf4dd7e06f 897 0 2026-03-07 01:10:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dd6b64fbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-156 calico-kube-controllers-dd6b64fbf-c45tm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
calib0c6c661f52 [] [] }} ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Namespace="calico-system" Pod="calico-kube-controllers-dd6b64fbf-c45tm" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:47.717 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Namespace="calico-system" Pod="calico-kube-controllers-dd6b64fbf-c45tm" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.021 [INFO][4615] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" HandleID="k8s-pod-network.ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.069 [INFO][4615] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" HandleID="k8s-pod-network.ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122f00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-156", "pod":"calico-kube-controllers-dd6b64fbf-c45tm", "timestamp":"2026-03-07 01:10:48.021406334 +0000 UTC"}, Hostname:"ip-172-31-29-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188b00)} Mar 7 01:10:49.023427 containerd[1890]: 
2026-03-07 01:10:48.069 [INFO][4615] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.715 [INFO][4615] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.715 [INFO][4615] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-156' Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.729 [INFO][4615] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" host="ip-172-31-29-156" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.773 [INFO][4615] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-156" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.798 [INFO][4615] ipam/ipam.go 526: Trying affinity for 192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.805 [INFO][4615] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.818 [INFO][4615] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.818 [INFO][4615] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" host="ip-172-31-29-156" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.829 [INFO][4615] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94 Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.851 [INFO][4615] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.0/26 
handle="k8s-pod-network.ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" host="ip-172-31-29-156" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.889 [INFO][4615] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.6/26] block=192.168.2.0/26 handle="k8s-pod-network.ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" host="ip-172-31-29-156" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.890 [INFO][4615] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.6/26] handle="k8s-pod-network.ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" host="ip-172-31-29-156" Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.890 [INFO][4615] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:49.023427 containerd[1890]: 2026-03-07 01:10:48.890 [INFO][4615] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.6/26] IPv6=[] ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" HandleID="k8s-pod-network.ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:10:49.026294 containerd[1890]: 2026-03-07 01:10:48.921 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Namespace="calico-system" Pod="calico-kube-controllers-dd6b64fbf-c45tm" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0", GenerateName:"calico-kube-controllers-dd6b64fbf-", Namespace:"calico-system", SelfLink:"", UID:"b097a7a7-3ecc-4008-8602-5cdf4dd7e06f", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, 
time.March, 7, 1, 10, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd6b64fbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"", Pod:"calico-kube-controllers-dd6b64fbf-c45tm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib0c6c661f52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:49.026294 containerd[1890]: 2026-03-07 01:10:48.921 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.6/32] ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Namespace="calico-system" Pod="calico-kube-controllers-dd6b64fbf-c45tm" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:10:49.026294 containerd[1890]: 2026-03-07 01:10:48.921 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0c6c661f52 ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Namespace="calico-system" Pod="calico-kube-controllers-dd6b64fbf-c45tm" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:10:49.026294 containerd[1890]: 2026-03-07 01:10:48.941 [INFO][4528] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Namespace="calico-system" Pod="calico-kube-controllers-dd6b64fbf-c45tm" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:10:49.026294 containerd[1890]: 2026-03-07 01:10:48.946 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Namespace="calico-system" Pod="calico-kube-controllers-dd6b64fbf-c45tm" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0", GenerateName:"calico-kube-controllers-dd6b64fbf-", Namespace:"calico-system", SelfLink:"", UID:"b097a7a7-3ecc-4008-8602-5cdf4dd7e06f", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd6b64fbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94", Pod:"calico-kube-controllers-dd6b64fbf-c45tm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib0c6c661f52", MAC:"ca:2f:7f:1f:6f:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:49.026294 containerd[1890]: 2026-03-07 01:10:49.006 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94" Namespace="calico-system" Pod="calico-kube-controllers-dd6b64fbf-c45tm" WorkloadEndpoint="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:10:49.047695 containerd[1890]: time="2026-03-07T01:10:49.046819693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:49.047695 containerd[1890]: time="2026-03-07T01:10:49.046948456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:49.047695 containerd[1890]: time="2026-03-07T01:10:49.046968771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.048171 containerd[1890]: time="2026-03-07T01:10:49.047140731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.096469 systemd[1]: run-containerd-runc-k8s.io-7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12-runc.CmI8zX.mount: Deactivated successfully. 
Mar 7 01:10:49.112208 systemd-networkd[1818]: cali6536b6480c9: Link UP Mar 7 01:10:49.115624 systemd-networkd[1818]: cali6536b6480c9: Gained carrier Mar 7 01:10:49.133240 systemd[1]: Started cri-containerd-7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12.scope - libcontainer container 7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12. Mar 7 01:10:49.200831 systemd[1]: Started cri-containerd-867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274.scope - libcontainer container 867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274. Mar 7 01:10:49.250067 systemd-networkd[1818]: calie1792dc7213: Link UP Mar 7 01:10:49.255229 systemd-networkd[1818]: calie1792dc7213: Gained carrier Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:47.721 [ERROR][4558] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:47.798 [INFO][4558] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0 csi-node-driver- calico-system cb0ce17a-98ad-40fb-b4f7-1528d43404aa 894 0 2026-03-07 01:10:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-156 csi-node-driver-9sl5j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6536b6480c9 [] [] }} ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Namespace="calico-system" Pod="csi-node-driver-9sl5j" WorkloadEndpoint="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-" Mar 7 
01:10:49.261713 containerd[1890]: 2026-03-07 01:10:47.800 [INFO][4558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Namespace="calico-system" Pod="csi-node-driver-9sl5j" WorkloadEndpoint="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.061 [INFO][4636] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" HandleID="k8s-pod-network.cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.109 [INFO][4636] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" HandleID="k8s-pod-network.cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003577d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-156", "pod":"csi-node-driver-9sl5j", "timestamp":"2026-03-07 01:10:48.061180286 +0000 UTC"}, Hostname:"ip-172-31-29-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001942c0)} Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.110 [INFO][4636] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.896 [INFO][4636] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.896 [INFO][4636] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-156' Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.904 [INFO][4636] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" host="ip-172-31-29-156" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.923 [INFO][4636] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-156" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.957 [INFO][4636] ipam/ipam.go 526: Trying affinity for 192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.968 [INFO][4636] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.975 [INFO][4636] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.976 [INFO][4636] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" host="ip-172-31-29-156" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:48.997 [INFO][4636] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63 Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:49.009 [INFO][4636] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" host="ip-172-31-29-156" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:49.034 [INFO][4636] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.7/26] block=192.168.2.0/26 
handle="k8s-pod-network.cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" host="ip-172-31-29-156" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:49.035 [INFO][4636] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.7/26] handle="k8s-pod-network.cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" host="ip-172-31-29-156" Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:49.037 [INFO][4636] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:49.261713 containerd[1890]: 2026-03-07 01:10:49.037 [INFO][4636] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.7/26] IPv6=[] ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" HandleID="k8s-pod-network.cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:10:49.269554 containerd[1890]: 2026-03-07 01:10:49.072 [INFO][4558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Namespace="calico-system" Pod="csi-node-driver-9sl5j" WorkloadEndpoint="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb0ce17a-98ad-40fb-b4f7-1528d43404aa", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"", Pod:"csi-node-driver-9sl5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6536b6480c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:49.269554 containerd[1890]: 2026-03-07 01:10:49.072 [INFO][4558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.7/32] ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Namespace="calico-system" Pod="csi-node-driver-9sl5j" WorkloadEndpoint="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:10:49.269554 containerd[1890]: 2026-03-07 01:10:49.072 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6536b6480c9 ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Namespace="calico-system" Pod="csi-node-driver-9sl5j" WorkloadEndpoint="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:10:49.269554 containerd[1890]: 2026-03-07 01:10:49.113 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Namespace="calico-system" Pod="csi-node-driver-9sl5j" WorkloadEndpoint="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:10:49.269554 containerd[1890]: 2026-03-07 01:10:49.113 [INFO][4558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" Namespace="calico-system" Pod="csi-node-driver-9sl5j" WorkloadEndpoint="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb0ce17a-98ad-40fb-b4f7-1528d43404aa", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63", Pod:"csi-node-driver-9sl5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6536b6480c9", MAC:"46:b7:f3:e8:7e:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:49.269554 containerd[1890]: 2026-03-07 01:10:49.154 [INFO][4558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63" 
Namespace="calico-system" Pod="csi-node-driver-9sl5j" WorkloadEndpoint="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:10:49.269554 containerd[1890]: time="2026-03-07T01:10:49.267405799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669bdb6b65-nhkt8,Uid:e39111b2-0e10-4e35-8ff3-e8249485a878,Namespace:calico-system,Attempt:1,} returns sandbox id \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\"" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:47.712 [ERROR][4551] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:47.760 [INFO][4551] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0 coredns-66bc5c9577- kube-system c72f5271-c799-4260-ae9b-eddba6e2b3f5 895 0 2026-03-07 01:10:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-156 coredns-66bc5c9577-rvl56 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1792dc7213 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Namespace="kube-system" Pod="coredns-66bc5c9577-rvl56" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:47.760 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Namespace="kube-system" Pod="coredns-66bc5c9577-rvl56" 
WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:48.121 [INFO][4624] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" HandleID="k8s-pod-network.26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:48.187 [INFO][4624] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" HandleID="k8s-pod-network.26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123f20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-156", "pod":"coredns-66bc5c9577-rvl56", "timestamp":"2026-03-07 01:10:48.121757374 +0000 UTC"}, Hostname:"ip-172-31-29-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004ee580)} Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:48.187 [INFO][4624] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.040 [INFO][4624] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.040 [INFO][4624] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-156' Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.050 [INFO][4624] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" host="ip-172-31-29-156" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.132 [INFO][4624] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-156" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.167 [INFO][4624] ipam/ipam.go 526: Trying affinity for 192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.173 [INFO][4624] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.181 [INFO][4624] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ip-172-31-29-156" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.182 [INFO][4624] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" host="ip-172-31-29-156" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.187 [INFO][4624] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828 Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.199 [INFO][4624] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" host="ip-172-31-29-156" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.225 [INFO][4624] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.8/26] block=192.168.2.0/26 
handle="k8s-pod-network.26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" host="ip-172-31-29-156" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.226 [INFO][4624] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.8/26] handle="k8s-pod-network.26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" host="ip-172-31-29-156" Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.226 [INFO][4624] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:10:49.318077 containerd[1890]: 2026-03-07 01:10:49.226 [INFO][4624] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.8/26] IPv6=[] ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" HandleID="k8s-pod-network.26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:49.322829 containerd[1890]: 2026-03-07 01:10:49.239 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Namespace="kube-system" Pod="coredns-66bc5c9577-rvl56" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c72f5271-c799-4260-ae9b-eddba6e2b3f5", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"", Pod:"coredns-66bc5c9577-rvl56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1792dc7213", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:49.322829 containerd[1890]: 2026-03-07 01:10:49.239 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.8/32] ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Namespace="kube-system" Pod="coredns-66bc5c9577-rvl56" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:49.322829 containerd[1890]: 2026-03-07 01:10:49.239 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1792dc7213 ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Namespace="kube-system" Pod="coredns-66bc5c9577-rvl56" 
WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:49.322829 containerd[1890]: 2026-03-07 01:10:49.262 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Namespace="kube-system" Pod="coredns-66bc5c9577-rvl56" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:49.322829 containerd[1890]: 2026-03-07 01:10:49.275 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Namespace="kube-system" Pod="coredns-66bc5c9577-rvl56" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c72f5271-c799-4260-ae9b-eddba6e2b3f5", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828", Pod:"coredns-66bc5c9577-rvl56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1792dc7213", MAC:"2a:1a:2d:73:b8:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:10:49.322829 containerd[1890]: 2026-03-07 01:10:49.298 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828" Namespace="kube-system" Pod="coredns-66bc5c9577-rvl56" WorkloadEndpoint="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:10:49.356064 containerd[1890]: time="2026-03-07T01:10:49.331439427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:49.356064 containerd[1890]: time="2026-03-07T01:10:49.331523132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:49.356064 containerd[1890]: time="2026-03-07T01:10:49.331548069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.356064 containerd[1890]: time="2026-03-07T01:10:49.331659046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.392142 containerd[1890]: time="2026-03-07T01:10:49.392091597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7lsxp,Uid:c3318a07-edd7-4b14-bc1e-8289ed132c3a,Namespace:kube-system,Attempt:1,} returns sandbox id \"7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12\"" Mar 7 01:10:49.446473 containerd[1890]: time="2026-03-07T01:10:49.446232439Z" level=info msg="CreateContainer within sandbox \"7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:10:49.481326 systemd[1]: Started cri-containerd-ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94.scope - libcontainer container ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94. Mar 7 01:10:49.543661 containerd[1890]: time="2026-03-07T01:10:49.543499078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-9kx9l,Uid:ec167290-b451-4cd1-a0d3-0912ddaf0ce2,Namespace:calico-system,Attempt:1,} returns sandbox id \"867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274\"" Mar 7 01:10:49.637190 systemd-networkd[1818]: calibc36f3d5211: Gained IPv6LL Mar 7 01:10:49.643799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296764072.mount: Deactivated successfully. Mar 7 01:10:49.649460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321677974.mount: Deactivated successfully. 
Mar 7 01:10:49.656994 containerd[1890]: time="2026-03-07T01:10:49.656948200Z" level=info msg="CreateContainer within sandbox \"7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14080cf6fbe6421ec7c83f857775acb53f235267f480b63be4d60743c0e23867\"" Mar 7 01:10:49.662187 containerd[1890]: time="2026-03-07T01:10:49.662080286Z" level=info msg="StartContainer for \"14080cf6fbe6421ec7c83f857775acb53f235267f480b63be4d60743c0e23867\"" Mar 7 01:10:49.701136 containerd[1890]: time="2026-03-07T01:10:49.683960409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:49.701136 containerd[1890]: time="2026-03-07T01:10:49.684075563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:49.701136 containerd[1890]: time="2026-03-07T01:10:49.684110489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.701136 containerd[1890]: time="2026-03-07T01:10:49.684251174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.728489 containerd[1890]: time="2026-03-07T01:10:49.728266531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd6b64fbf-c45tm,Uid:b097a7a7-3ecc-4008-8602-5cdf4dd7e06f,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94\"" Mar 7 01:10:49.748974 containerd[1890]: time="2026-03-07T01:10:49.748498300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:49.748974 containerd[1890]: time="2026-03-07T01:10:49.748584273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:49.748974 containerd[1890]: time="2026-03-07T01:10:49.748608511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.750070 containerd[1890]: time="2026-03-07T01:10:49.749464029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.758094 systemd-networkd[1818]: calieecb1753e85: Gained IPv6LL Mar 7 01:10:49.782349 systemd[1]: Started cri-containerd-cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63.scope - libcontainer container cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63. Mar 7 01:10:49.792624 systemd[1]: Started cri-containerd-14080cf6fbe6421ec7c83f857775acb53f235267f480b63be4d60743c0e23867.scope - libcontainer container 14080cf6fbe6421ec7c83f857775acb53f235267f480b63be4d60743c0e23867. Mar 7 01:10:49.813244 systemd[1]: Started cri-containerd-26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828.scope - libcontainer container 26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828. 
Mar 7 01:10:49.874250 containerd[1890]: time="2026-03-07T01:10:49.874122737Z" level=info msg="StartContainer for \"14080cf6fbe6421ec7c83f857775acb53f235267f480b63be4d60743c0e23867\" returns successfully" Mar 7 01:10:49.903183 containerd[1890]: time="2026-03-07T01:10:49.903057552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9sl5j,Uid:cb0ce17a-98ad-40fb-b4f7-1528d43404aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63\"" Mar 7 01:10:49.939246 containerd[1890]: time="2026-03-07T01:10:49.939207817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rvl56,Uid:c72f5271-c799-4260-ae9b-eddba6e2b3f5,Namespace:kube-system,Attempt:1,} returns sandbox id \"26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828\"" Mar 7 01:10:49.949846 systemd-networkd[1818]: cali51d0eef42d2: Gained IPv6LL Mar 7 01:10:49.952855 containerd[1890]: time="2026-03-07T01:10:49.952491366Z" level=info msg="CreateContainer within sandbox \"26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:10:49.983683 containerd[1890]: time="2026-03-07T01:10:49.983627084Z" level=info msg="CreateContainer within sandbox \"26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aca941cbe28ff7d197f57204fd51a00ae17fd1359f197782d51c66f2616e9c67\"" Mar 7 01:10:49.985593 containerd[1890]: time="2026-03-07T01:10:49.985546569Z" level=info msg="StartContainer for \"aca941cbe28ff7d197f57204fd51a00ae17fd1359f197782d51c66f2616e9c67\"" Mar 7 01:10:50.059221 systemd[1]: Started cri-containerd-aca941cbe28ff7d197f57204fd51a00ae17fd1359f197782d51c66f2616e9c67.scope - libcontainer container aca941cbe28ff7d197f57204fd51a00ae17fd1359f197782d51c66f2616e9c67. 
Mar 7 01:10:50.079064 systemd-networkd[1818]: cali99b024c8fdc: Gained IPv6LL Mar 7 01:10:50.126327 containerd[1890]: time="2026-03-07T01:10:50.126211373Z" level=info msg="StartContainer for \"aca941cbe28ff7d197f57204fd51a00ae17fd1359f197782d51c66f2616e9c67\" returns successfully" Mar 7 01:10:50.333612 systemd-networkd[1818]: cali6536b6480c9: Gained IPv6LL Mar 7 01:10:50.398055 kernel: calico-node[5047]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:10:50.535366 systemd-networkd[1818]: cali7cebaccf7e2: Gained IPv6LL Mar 7 01:10:50.603633 kubelet[3114]: I0307 01:10:50.603180 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rvl56" podStartSLOduration=45.588659018 podStartE2EDuration="45.588659018s" podCreationTimestamp="2026-03-07 01:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:50.587875173 +0000 UTC m=+51.769846757" watchObservedRunningTime="2026-03-07 01:10:50.588659018 +0000 UTC m=+51.770630198" Mar 7 01:10:50.783217 systemd-networkd[1818]: calib0c6c661f52: Gained IPv6LL Mar 7 01:10:51.038248 systemd-networkd[1818]: calie1792dc7213: Gained IPv6LL Mar 7 01:10:51.551598 kubelet[3114]: I0307 01:10:51.530990 3114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:10:52.173358 systemd-networkd[1818]: vxlan.calico: Link UP Mar 7 01:10:52.173376 systemd-networkd[1818]: vxlan.calico: Gained carrier Mar 7 01:10:52.933046 systemd[1]: Started sshd@7-172.31.29.156:22-68.220.241.50:34240.service - OpenSSH per-connection server daemon (68.220.241.50:34240). 
Mar 7 01:10:53.525097 sshd[5311]: Accepted publickey for core from 68.220.241.50 port 34240 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:10:53.533480 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:10:53.551024 kubelet[3114]: I0307 01:10:53.548615 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7lsxp" podStartSLOduration=48.548591848 podStartE2EDuration="48.548591848s" podCreationTimestamp="2026-03-07 01:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:50.713734239 +0000 UTC m=+51.895705420" watchObservedRunningTime="2026-03-07 01:10:53.548591848 +0000 UTC m=+54.730563045" Mar 7 01:10:53.562975 systemd-logind[1880]: New session 8 of user core. Mar 7 01:10:53.569231 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:10:54.239127 systemd-networkd[1818]: vxlan.calico: Gained IPv6LL Mar 7 01:10:54.569675 containerd[1890]: time="2026-03-07T01:10:54.566110255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:10:54.590330 containerd[1890]: time="2026-03-07T01:10:54.590273279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:54.599326 containerd[1890]: time="2026-03-07T01:10:54.599259270Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:54.668356 containerd[1890]: time="2026-03-07T01:10:54.668268399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 7 01:10:54.676503 containerd[1890]: time="2026-03-07T01:10:54.676354370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.975813035s" Mar 7 01:10:54.676503 containerd[1890]: time="2026-03-07T01:10:54.676398692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:10:54.679059 containerd[1890]: time="2026-03-07T01:10:54.678930950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:10:54.697631 containerd[1890]: time="2026-03-07T01:10:54.697579619Z" level=info msg="CreateContainer within sandbox \"63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:10:54.738629 containerd[1890]: time="2026-03-07T01:10:54.738583366Z" level=info msg="CreateContainer within sandbox \"63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"48951daf275197d1a909c6f3f56fdda7947292693968c09d032db0eb83de1901\"" Mar 7 01:10:54.739423 containerd[1890]: time="2026-03-07T01:10:54.739393056Z" level=info msg="StartContainer for \"48951daf275197d1a909c6f3f56fdda7947292693968c09d032db0eb83de1901\"" Mar 7 01:10:54.764971 sshd[5311]: pam_unix(sshd:session): session closed for user core Mar 7 01:10:54.772284 systemd[1]: sshd@7-172.31.29.156:22-68.220.241.50:34240.service: Deactivated successfully. Mar 7 01:10:54.776984 systemd[1]: session-8.scope: Deactivated successfully. 
Mar 7 01:10:54.780185 systemd-logind[1880]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:10:54.782154 systemd-logind[1880]: Removed session 8. Mar 7 01:10:54.837257 systemd[1]: Started cri-containerd-48951daf275197d1a909c6f3f56fdda7947292693968c09d032db0eb83de1901.scope - libcontainer container 48951daf275197d1a909c6f3f56fdda7947292693968c09d032db0eb83de1901. Mar 7 01:10:54.910695 containerd[1890]: time="2026-03-07T01:10:54.910645408Z" level=info msg="StartContainer for \"48951daf275197d1a909c6f3f56fdda7947292693968c09d032db0eb83de1901\" returns successfully" Mar 7 01:10:54.980896 containerd[1890]: time="2026-03-07T01:10:54.980845845Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:54.983836 containerd[1890]: time="2026-03-07T01:10:54.983775137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:10:54.987901 containerd[1890]: time="2026-03-07T01:10:54.987756714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 308.769863ms" Mar 7 01:10:54.987901 containerd[1890]: time="2026-03-07T01:10:54.987904782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:10:54.989931 containerd[1890]: time="2026-03-07T01:10:54.989834994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:10:54.998474 containerd[1890]: time="2026-03-07T01:10:54.998282428Z" level=info msg="CreateContainer within sandbox 
\"10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:10:55.032621 containerd[1890]: time="2026-03-07T01:10:55.031109339Z" level=info msg="CreateContainer within sandbox \"10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"758ea35d52b37c976e74f28014dd1829e81eb4f11b966dd5e5733d1aeba922a2\"" Mar 7 01:10:55.032621 containerd[1890]: time="2026-03-07T01:10:55.032543465Z" level=info msg="StartContainer for \"758ea35d52b37c976e74f28014dd1829e81eb4f11b966dd5e5733d1aeba922a2\"" Mar 7 01:10:55.097229 systemd[1]: Started cri-containerd-758ea35d52b37c976e74f28014dd1829e81eb4f11b966dd5e5733d1aeba922a2.scope - libcontainer container 758ea35d52b37c976e74f28014dd1829e81eb4f11b966dd5e5733d1aeba922a2. Mar 7 01:10:55.177762 containerd[1890]: time="2026-03-07T01:10:55.177715078Z" level=info msg="StartContainer for \"758ea35d52b37c976e74f28014dd1829e81eb4f11b966dd5e5733d1aeba922a2\" returns successfully" Mar 7 01:10:55.727031 kubelet[3114]: I0307 01:10:55.725717 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5fffbd9bb8-8fmss" podStartSLOduration=29.463626079 podStartE2EDuration="35.725682027s" podCreationTimestamp="2026-03-07 01:10:20 +0000 UTC" firstStartedPulling="2026-03-07 01:10:48.726929955 +0000 UTC m=+49.908901124" lastFinishedPulling="2026-03-07 01:10:54.988985907 +0000 UTC m=+56.170957072" observedRunningTime="2026-03-07 01:10:55.720833667 +0000 UTC m=+56.902804847" watchObservedRunningTime="2026-03-07 01:10:55.725682027 +0000 UTC m=+56.907653208" Mar 7 01:10:56.346254 ntpd[1872]: Listen normally on 8 vxlan.calico 192.168.2.0:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 8 vxlan.calico 192.168.2.0:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 9 
calieecb1753e85 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 10 cali51d0eef42d2 [fe80::ecee:eeff:feee:eeee%5]:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 11 cali99b024c8fdc [fe80::ecee:eeff:feee:eeee%6]:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 12 calibc36f3d5211 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 13 cali7cebaccf7e2 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 14 calib0c6c661f52 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 15 cali6536b6480c9 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 16 calie1792dc7213 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 7 01:10:56.350140 ntpd[1872]: 7 Mar 01:10:56 ntpd[1872]: Listen normally on 17 vxlan.calico [fe80::64c3:77ff:fe65:a60e%12]:123 Mar 7 01:10:56.346356 ntpd[1872]: Listen normally on 9 calieecb1753e85 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 7 01:10:56.346420 ntpd[1872]: Listen normally on 10 cali51d0eef42d2 [fe80::ecee:eeff:feee:eeee%5]:123 Mar 7 01:10:56.346461 ntpd[1872]: Listen normally on 11 cali99b024c8fdc [fe80::ecee:eeff:feee:eeee%6]:123 Mar 7 01:10:56.346504 ntpd[1872]: Listen normally on 12 calibc36f3d5211 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 7 01:10:56.346543 ntpd[1872]: Listen normally on 13 cali7cebaccf7e2 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 01:10:56.346581 ntpd[1872]: Listen normally on 14 calib0c6c661f52 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 01:10:56.346620 ntpd[1872]: Listen normally on 15 cali6536b6480c9 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 01:10:56.346665 ntpd[1872]: Listen normally on 16 calie1792dc7213 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 7 01:10:56.346704 ntpd[1872]: 
Listen normally on 17 vxlan.calico [fe80::64c3:77ff:fe65:a60e%12]:123 Mar 7 01:10:56.820527 kubelet[3114]: I0307 01:10:56.820490 3114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:10:56.823927 kubelet[3114]: I0307 01:10:56.823432 3114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:10:56.993858 containerd[1890]: time="2026-03-07T01:10:56.993474246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:56.998365 containerd[1890]: time="2026-03-07T01:10:56.997382542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:10:57.005912 containerd[1890]: time="2026-03-07T01:10:57.005245937Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:57.015328 containerd[1890]: time="2026-03-07T01:10:57.015282544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:57.033781 containerd[1890]: time="2026-03-07T01:10:57.033713714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.043821412s" Mar 7 01:10:57.034154 containerd[1890]: time="2026-03-07T01:10:57.034042373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 
01:10:57.048034 containerd[1890]: time="2026-03-07T01:10:57.047282631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:10:57.053330 containerd[1890]: time="2026-03-07T01:10:57.053153069Z" level=info msg="CreateContainer within sandbox \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:10:57.088393 containerd[1890]: time="2026-03-07T01:10:57.088184178Z" level=info msg="CreateContainer within sandbox \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f\"" Mar 7 01:10:57.088501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645140364.mount: Deactivated successfully. Mar 7 01:10:57.092519 containerd[1890]: time="2026-03-07T01:10:57.091709491Z" level=info msg="StartContainer for \"3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f\"" Mar 7 01:10:57.267475 systemd[1]: run-containerd-runc-k8s.io-3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f-runc.LVDQOn.mount: Deactivated successfully. Mar 7 01:10:57.275418 systemd[1]: Started cri-containerd-3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f.scope - libcontainer container 3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f. Mar 7 01:10:57.347585 containerd[1890]: time="2026-03-07T01:10:57.346243999Z" level=info msg="StartContainer for \"3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f\" returns successfully" Mar 7 01:10:59.288415 containerd[1890]: time="2026-03-07T01:10:59.287531297Z" level=info msg="StopPodSandbox for \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\"" Mar 7 01:10:59.880412 systemd[1]: Started sshd@8-172.31.29.156:22-68.220.241.50:34252.service - OpenSSH per-connection server daemon (68.220.241.50:34252). 
Mar 7 01:11:00.514118 sshd[5588]: Accepted publickey for core from 68.220.241.50 port 34252 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:00.518300 sshd[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:00.535793 systemd-logind[1880]: New session 9 of user core. Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:10:59.837 [WARNING][5559] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0", GenerateName:"calico-apiserver-5fffbd9bb8-", Namespace:"calico-system", SelfLink:"", UID:"84013ff8-1caf-4210-8629-c308dc9cf92c", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fffbd9bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20", Pod:"calico-apiserver-5fffbd9bb8-8fmss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali51d0eef42d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:10:59.845 [INFO][5559] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:10:59.845 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" iface="eth0" netns="" Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:10:59.845 [INFO][5559] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:10:59.845 [INFO][5559] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:11:00.460 [INFO][5586] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" HandleID="k8s-pod-network.5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:11:00.466 [INFO][5586] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:11:00.469 [INFO][5586] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:11:00.504 [WARNING][5586] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" HandleID="k8s-pod-network.5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:11:00.504 [INFO][5586] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" HandleID="k8s-pod-network.5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:11:00.509 [INFO][5586] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:00.539963 containerd[1890]: 2026-03-07 01:11:00.522 [INFO][5559] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Mar 7 01:11:00.539963 containerd[1890]: time="2026-03-07T01:11:00.539543299Z" level=info msg="TearDown network for sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\" successfully" Mar 7 01:11:00.539963 containerd[1890]: time="2026-03-07T01:11:00.539571202Z" level=info msg="StopPodSandbox for \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\" returns successfully" Mar 7 01:11:00.544810 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 7 01:11:00.680325 containerd[1890]: time="2026-03-07T01:11:00.680240997Z" level=info msg="RemovePodSandbox for \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\"" Mar 7 01:11:00.684352 containerd[1890]: time="2026-03-07T01:11:00.684148442Z" level=info msg="Forcibly stopping sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\"" Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.805 [WARNING][5610] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0", GenerateName:"calico-apiserver-5fffbd9bb8-", Namespace:"calico-system", SelfLink:"", UID:"84013ff8-1caf-4210-8629-c308dc9cf92c", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fffbd9bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"10de6ce919483936bd11836cdc7c56f459922c3060d91051fb0cec1d419f0d20", Pod:"calico-apiserver-5fffbd9bb8-8fmss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali51d0eef42d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.807 [INFO][5610] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.807 [INFO][5610] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" iface="eth0" netns="" Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.807 [INFO][5610] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.807 [INFO][5610] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.862 [INFO][5621] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" HandleID="k8s-pod-network.5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.863 [INFO][5621] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.863 [INFO][5621] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.877 [WARNING][5621] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" HandleID="k8s-pod-network.5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.877 [INFO][5621] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" HandleID="k8s-pod-network.5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--8fmss-eth0" Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.881 [INFO][5621] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:00.903622 containerd[1890]: 2026-03-07 01:11:00.890 [INFO][5610] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a" Mar 7 01:11:00.905137 containerd[1890]: time="2026-03-07T01:11:00.903670028Z" level=info msg="TearDown network for sandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\" successfully" Mar 7 01:11:01.005581 containerd[1890]: time="2026-03-07T01:11:01.004814455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:11:01.085142 containerd[1890]: time="2026-03-07T01:11:01.083600012Z" level=info msg="RemovePodSandbox \"5d0fcf4dab67c8d93f1e1124224a1e6311c7cfcf308268b210578a9b8f750e8a\" returns successfully" Mar 7 01:11:01.107192 containerd[1890]: time="2026-03-07T01:11:01.106481537Z" level=info msg="StopPodSandbox for \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\"" Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.256 [WARNING][5639] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb0ce17a-98ad-40fb-b4f7-1528d43404aa", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63", Pod:"csi-node-driver-9sl5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6536b6480c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.257 [INFO][5639] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.257 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" iface="eth0" netns="" Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.257 [INFO][5639] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.257 [INFO][5639] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.335 [INFO][5646] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" HandleID="k8s-pod-network.310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.336 [INFO][5646] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.336 [INFO][5646] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.351 [WARNING][5646] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" HandleID="k8s-pod-network.310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.353 [INFO][5646] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" HandleID="k8s-pod-network.310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.357 [INFO][5646] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:01.369018 containerd[1890]: 2026-03-07 01:11:01.361 [INFO][5639] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Mar 7 01:11:01.369018 containerd[1890]: time="2026-03-07T01:11:01.368497104Z" level=info msg="TearDown network for sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\" successfully" Mar 7 01:11:01.369018 containerd[1890]: time="2026-03-07T01:11:01.368529345Z" level=info msg="StopPodSandbox for \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\" returns successfully" Mar 7 01:11:01.371670 containerd[1890]: time="2026-03-07T01:11:01.369362050Z" level=info msg="RemovePodSandbox for \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\"" Mar 7 01:11:01.371670 containerd[1890]: time="2026-03-07T01:11:01.369401547Z" level=info msg="Forcibly stopping sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\"" Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.533 [WARNING][5660] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb0ce17a-98ad-40fb-b4f7-1528d43404aa", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63", Pod:"csi-node-driver-9sl5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6536b6480c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.534 [INFO][5660] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.534 [INFO][5660] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" iface="eth0" netns="" Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.534 [INFO][5660] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.534 [INFO][5660] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.623 [INFO][5667] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" HandleID="k8s-pod-network.310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.624 [INFO][5667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.624 [INFO][5667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.654 [WARNING][5667] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" HandleID="k8s-pod-network.310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.654 [INFO][5667] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" HandleID="k8s-pod-network.310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Workload="ip--172--31--29--156-k8s-csi--node--driver--9sl5j-eth0" Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.661 [INFO][5667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:01.667814 containerd[1890]: 2026-03-07 01:11:01.664 [INFO][5660] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613" Mar 7 01:11:01.667814 containerd[1890]: time="2026-03-07T01:11:01.667717963Z" level=info msg="TearDown network for sandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\" successfully" Mar 7 01:11:01.726854 containerd[1890]: time="2026-03-07T01:11:01.726426414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:11:01.726854 containerd[1890]: time="2026-03-07T01:11:01.726739242Z" level=info msg="RemovePodSandbox \"310709574521823a1da1e64f86c1ca73d58804ce949dd386cf59f43444a78613\" returns successfully" Mar 7 01:11:01.870831 containerd[1890]: time="2026-03-07T01:11:01.870773552Z" level=info msg="StopPodSandbox for \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\"" Mar 7 01:11:02.331157 sshd[5588]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:02.348157 systemd-logind[1880]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:11:02.349324 systemd[1]: sshd@8-172.31.29.156:22-68.220.241.50:34252.service: Deactivated successfully. Mar 7 01:11:02.357834 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:11:02.364398 systemd-logind[1880]: Removed session 9. Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.399 [WARNING][5683] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0", GenerateName:"calico-kube-controllers-dd6b64fbf-", Namespace:"calico-system", SelfLink:"", UID:"b097a7a7-3ecc-4008-8602-5cdf4dd7e06f", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd6b64fbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94", Pod:"calico-kube-controllers-dd6b64fbf-c45tm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib0c6c661f52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.400 [INFO][5683] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.400 [INFO][5683] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" iface="eth0" netns="" Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.400 [INFO][5683] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.400 [INFO][5683] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.451 [INFO][5693] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" HandleID="k8s-pod-network.ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.452 [INFO][5693] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.452 [INFO][5693] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.464 [WARNING][5693] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" HandleID="k8s-pod-network.ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.464 [INFO][5693] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" HandleID="k8s-pod-network.ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.470 [INFO][5693] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:02.510334 containerd[1890]: 2026-03-07 01:11:02.477 [INFO][5683] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Mar 7 01:11:02.520193 containerd[1890]: time="2026-03-07T01:11:02.510380416Z" level=info msg="TearDown network for sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\" successfully" Mar 7 01:11:02.520193 containerd[1890]: time="2026-03-07T01:11:02.510426424Z" level=info msg="StopPodSandbox for \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\" returns successfully" Mar 7 01:11:02.527995 containerd[1890]: time="2026-03-07T01:11:02.527947118Z" level=info msg="RemovePodSandbox for \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\"" Mar 7 01:11:02.527995 containerd[1890]: time="2026-03-07T01:11:02.527990434Z" level=info msg="Forcibly stopping sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\"" Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.784 [WARNING][5707] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0", GenerateName:"calico-kube-controllers-dd6b64fbf-", Namespace:"calico-system", SelfLink:"", UID:"b097a7a7-3ecc-4008-8602-5cdf4dd7e06f", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd6b64fbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94", Pod:"calico-kube-controllers-dd6b64fbf-c45tm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib0c6c661f52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.784 [INFO][5707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.784 [INFO][5707] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" iface="eth0" netns="" Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.784 [INFO][5707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.784 [INFO][5707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.838 [INFO][5714] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" HandleID="k8s-pod-network.ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.839 [INFO][5714] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.839 [INFO][5714] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.858 [WARNING][5714] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" HandleID="k8s-pod-network.ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.858 [INFO][5714] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" HandleID="k8s-pod-network.ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Workload="ip--172--31--29--156-k8s-calico--kube--controllers--dd6b64fbf--c45tm-eth0" Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.864 [INFO][5714] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:02.880102 containerd[1890]: 2026-03-07 01:11:02.869 [INFO][5707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902" Mar 7 01:11:02.880102 containerd[1890]: time="2026-03-07T01:11:02.879288528Z" level=info msg="TearDown network for sandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\" successfully" Mar 7 01:11:02.892756 containerd[1890]: time="2026-03-07T01:11:02.892707103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:11:02.895669 containerd[1890]: time="2026-03-07T01:11:02.892796875Z" level=info msg="RemovePodSandbox \"ddd8e99e314d9b4f882513cffc793abbff7a3732819d6b39628bae8815cba902\" returns successfully" Mar 7 01:11:02.895669 containerd[1890]: time="2026-03-07T01:11:02.893584345Z" level=info msg="StopPodSandbox for \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\"" Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.033 [WARNING][5728] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c72f5271-c799-4260-ae9b-eddba6e2b3f5", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828", Pod:"coredns-66bc5c9577-rvl56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1792dc7213", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.034 [INFO][5728] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.034 [INFO][5728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" iface="eth0" netns="" Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.034 [INFO][5728] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.034 [INFO][5728] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.098 [INFO][5736] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" HandleID="k8s-pod-network.661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.098 [INFO][5736] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.098 [INFO][5736] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.111 [WARNING][5736] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" HandleID="k8s-pod-network.661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.111 [INFO][5736] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" HandleID="k8s-pod-network.661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.114 [INFO][5736] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:03.122136 containerd[1890]: 2026-03-07 01:11:03.117 [INFO][5728] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:11:03.123840 containerd[1890]: time="2026-03-07T01:11:03.122188125Z" level=info msg="TearDown network for sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\" successfully" Mar 7 01:11:03.123840 containerd[1890]: time="2026-03-07T01:11:03.122221978Z" level=info msg="StopPodSandbox for \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\" returns successfully" Mar 7 01:11:03.123840 containerd[1890]: time="2026-03-07T01:11:03.122786984Z" level=info msg="RemovePodSandbox for \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\"" Mar 7 01:11:03.123840 containerd[1890]: time="2026-03-07T01:11:03.122821348Z" level=info msg="Forcibly stopping sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\"" Mar 7 01:11:03.179964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915397267.mount: Deactivated successfully. 
Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.207 [WARNING][5750] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c72f5271-c799-4260-ae9b-eddba6e2b3f5", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"26fb4ce5d5cfc782ae46bdb602f913f297d50e4a6a9519e7acbfad1fe407e828", Pod:"coredns-66bc5c9577-rvl56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1792dc7213", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.207 [INFO][5750] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.207 [INFO][5750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" iface="eth0" netns="" Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.207 [INFO][5750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.207 [INFO][5750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.269 [INFO][5761] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" HandleID="k8s-pod-network.661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.269 [INFO][5761] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.269 [INFO][5761] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.281 [WARNING][5761] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" HandleID="k8s-pod-network.661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.281 [INFO][5761] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" HandleID="k8s-pod-network.661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--rvl56-eth0" Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.285 [INFO][5761] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:03.297551 containerd[1890]: 2026-03-07 01:11:03.294 [INFO][5750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040" Mar 7 01:11:03.298414 containerd[1890]: time="2026-03-07T01:11:03.297599473Z" level=info msg="TearDown network for sandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\" successfully" Mar 7 01:11:03.306670 containerd[1890]: time="2026-03-07T01:11:03.306404118Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:11:03.306670 containerd[1890]: time="2026-03-07T01:11:03.306490759Z" level=info msg="RemovePodSandbox \"661c9389056cc3660920971191063bb1f7531e643e2d78b53970713803121040\" returns successfully" Mar 7 01:11:03.307511 containerd[1890]: time="2026-03-07T01:11:03.307366624Z" level=info msg="StopPodSandbox for \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\"" Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.377 [WARNING][5775] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c3318a07-edd7-4b14-bc1e-8289ed132c3a", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12", Pod:"coredns-66bc5c9577-7lsxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc36f3d5211", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.377 [INFO][5775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.377 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" iface="eth0" netns="" Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.377 [INFO][5775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.377 [INFO][5775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.431 [INFO][5782] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" HandleID="k8s-pod-network.9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.433 [INFO][5782] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.433 [INFO][5782] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.449 [WARNING][5782] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" HandleID="k8s-pod-network.9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.449 [INFO][5782] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" HandleID="k8s-pod-network.9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.455 [INFO][5782] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:03.472104 containerd[1890]: 2026-03-07 01:11:03.464 [INFO][5775] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Mar 7 01:11:03.472104 containerd[1890]: time="2026-03-07T01:11:03.471830262Z" level=info msg="TearDown network for sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\" successfully" Mar 7 01:11:03.472104 containerd[1890]: time="2026-03-07T01:11:03.471860960Z" level=info msg="StopPodSandbox for \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\" returns successfully" Mar 7 01:11:03.480977 containerd[1890]: time="2026-03-07T01:11:03.477162342Z" level=info msg="RemovePodSandbox for \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\"" Mar 7 01:11:03.480977 containerd[1890]: time="2026-03-07T01:11:03.477521594Z" level=info msg="Forcibly stopping sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\"" Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.597 [WARNING][5802] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c3318a07-edd7-4b14-bc1e-8289ed132c3a", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"7128e3de14d1a4ed371bc1231c34bf21f1f935e8b41b53d0dbf4a1afe61c1c12", Pod:"coredns-66bc5c9577-7lsxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc36f3d5211", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.597 [INFO][5802] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.597 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" iface="eth0" netns="" Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.597 [INFO][5802] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.597 [INFO][5802] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.653 [INFO][5815] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" HandleID="k8s-pod-network.9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.653 [INFO][5815] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.653 [INFO][5815] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.664 [WARNING][5815] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" HandleID="k8s-pod-network.9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.664 [INFO][5815] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" HandleID="k8s-pod-network.9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Workload="ip--172--31--29--156-k8s-coredns--66bc5c9577--7lsxp-eth0" Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.667 [INFO][5815] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:03.675512 containerd[1890]: 2026-03-07 01:11:03.671 [INFO][5802] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0" Mar 7 01:11:03.677352 containerd[1890]: time="2026-03-07T01:11:03.675608822Z" level=info msg="TearDown network for sandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\" successfully" Mar 7 01:11:03.687243 containerd[1890]: time="2026-03-07T01:11:03.687189531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:11:03.687243 containerd[1890]: time="2026-03-07T01:11:03.687279039Z" level=info msg="RemovePodSandbox \"9120a8cca8fc27f512ef3ea5c6ca6c5ad0e99e37ebf0e7d9a053d26ab94ed0b0\" returns successfully" Mar 7 01:11:03.688132 containerd[1890]: time="2026-03-07T01:11:03.687931755Z" level=info msg="StopPodSandbox for \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\"" Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.768 [WARNING][5828] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0", GenerateName:"calico-apiserver-5fffbd9bb8-", Namespace:"calico-system", SelfLink:"", UID:"31e4d6ab-37b1-4c95-b346-1277ec27fa0d", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fffbd9bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3", Pod:"calico-apiserver-5fffbd9bb8-p7j5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calieecb1753e85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.769 [INFO][5828] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.769 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" iface="eth0" netns="" Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.769 [INFO][5828] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.769 [INFO][5828] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.829 [INFO][5835] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" HandleID="k8s-pod-network.ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.829 [INFO][5835] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.829 [INFO][5835] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.841 [WARNING][5835] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" HandleID="k8s-pod-network.ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.841 [INFO][5835] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" HandleID="k8s-pod-network.ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.844 [INFO][5835] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:03.852755 containerd[1890]: 2026-03-07 01:11:03.849 [INFO][5828] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Mar 7 01:11:03.852755 containerd[1890]: time="2026-03-07T01:11:03.852735358Z" level=info msg="TearDown network for sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\" successfully" Mar 7 01:11:03.855041 containerd[1890]: time="2026-03-07T01:11:03.852766793Z" level=info msg="StopPodSandbox for \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\" returns successfully" Mar 7 01:11:03.855580 containerd[1890]: time="2026-03-07T01:11:03.855542438Z" level=info msg="RemovePodSandbox for \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\"" Mar 7 01:11:03.855765 containerd[1890]: time="2026-03-07T01:11:03.855590555Z" level=info msg="Forcibly stopping sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\"" Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:03.953 [WARNING][5850] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0", GenerateName:"calico-apiserver-5fffbd9bb8-", Namespace:"calico-system", SelfLink:"", UID:"31e4d6ab-37b1-4c95-b346-1277ec27fa0d", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fffbd9bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"63bb7fe5e5f49f2d606f24a7c6471461b8206efd3ed866ae6b22f3b917325da3", Pod:"calico-apiserver-5fffbd9bb8-p7j5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calieecb1753e85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:03.953 [INFO][5850] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:03.953 [INFO][5850] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" iface="eth0" netns="" Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:03.953 [INFO][5850] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:03.953 [INFO][5850] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:04.017 [INFO][5858] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" HandleID="k8s-pod-network.ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:04.017 [INFO][5858] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:04.017 [INFO][5858] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:04.030 [WARNING][5858] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" HandleID="k8s-pod-network.ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:04.030 [INFO][5858] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" HandleID="k8s-pod-network.ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Workload="ip--172--31--29--156-k8s-calico--apiserver--5fffbd9bb8--p7j5c-eth0" Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:04.033 [INFO][5858] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:04.047375 containerd[1890]: 2026-03-07 01:11:04.036 [INFO][5850] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254" Mar 7 01:11:04.050074 containerd[1890]: time="2026-03-07T01:11:04.047419625Z" level=info msg="TearDown network for sandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\" successfully" Mar 7 01:11:04.060339 containerd[1890]: time="2026-03-07T01:11:04.060082752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:11:04.060339 containerd[1890]: time="2026-03-07T01:11:04.060167299Z" level=info msg="RemovePodSandbox \"ba7b2e28a7d246b29bded19b23b3a56ceef7edb20e861d0af8e111b2f8763254\" returns successfully" Mar 7 01:11:04.061695 containerd[1890]: time="2026-03-07T01:11:04.061332790Z" level=info msg="StopPodSandbox for \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\"" Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.145 [WARNING][5872] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ec167290-b451-4cd1-a0d3-0912ddaf0ce2", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274", Pod:"goldmane-cccfbd5cf-9kx9l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cebaccf7e2", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.145 [INFO][5872] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.145 [INFO][5872] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" iface="eth0" netns="" Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.145 [INFO][5872] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.146 [INFO][5872] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.206 [INFO][5880] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" HandleID="k8s-pod-network.7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.206 [INFO][5880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.206 [INFO][5880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.215 [WARNING][5880] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" HandleID="k8s-pod-network.7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.215 [INFO][5880] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" HandleID="k8s-pod-network.7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.217 [INFO][5880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:04.224312 containerd[1890]: 2026-03-07 01:11:04.221 [INFO][5872] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:11:04.225410 containerd[1890]: time="2026-03-07T01:11:04.224354272Z" level=info msg="TearDown network for sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\" successfully" Mar 7 01:11:04.225410 containerd[1890]: time="2026-03-07T01:11:04.224383416Z" level=info msg="StopPodSandbox for \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\" returns successfully" Mar 7 01:11:04.225410 containerd[1890]: time="2026-03-07T01:11:04.224994153Z" level=info msg="RemovePodSandbox for \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\"" Mar 7 01:11:04.225410 containerd[1890]: time="2026-03-07T01:11:04.225041583Z" level=info msg="Forcibly stopping sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\"" Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.299 [WARNING][5895] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ec167290-b451-4cd1-a0d3-0912ddaf0ce2", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274", Pod:"goldmane-cccfbd5cf-9kx9l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cebaccf7e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.299 [INFO][5895] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.299 [INFO][5895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" iface="eth0" netns="" Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.299 [INFO][5895] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.300 [INFO][5895] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.341 [INFO][5902] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" HandleID="k8s-pod-network.7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.341 [INFO][5902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.341 [INFO][5902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.353 [WARNING][5902] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" HandleID="k8s-pod-network.7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.353 [INFO][5902] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" HandleID="k8s-pod-network.7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Workload="ip--172--31--29--156-k8s-goldmane--cccfbd5cf--9kx9l-eth0" Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.358 [INFO][5902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:04.383876 containerd[1890]: 2026-03-07 01:11:04.362 [INFO][5895] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7" Mar 7 01:11:04.383876 containerd[1890]: time="2026-03-07T01:11:04.382705197Z" level=info msg="TearDown network for sandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\" successfully" Mar 7 01:11:04.385593 containerd[1890]: time="2026-03-07T01:11:04.385533672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:11:04.387284 containerd[1890]: time="2026-03-07T01:11:04.387243854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:04.389038 containerd[1890]: time="2026-03-07T01:11:04.388907263Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:11:04.389038 containerd[1890]: time="2026-03-07T01:11:04.388986047Z" level=info msg="RemovePodSandbox \"7b65c902c69d759ce2d3e6171f3b29f6d48f79083b7b1eb3061112466fd9bfa7\" returns successfully" Mar 7 01:11:04.403515 containerd[1890]: time="2026-03-07T01:11:04.403147839Z" level=info msg="StopPodSandbox for \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\"" Mar 7 01:11:04.414265 containerd[1890]: time="2026-03-07T01:11:04.414215852Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:04.419522 containerd[1890]: time="2026-03-07T01:11:04.419471212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:04.423762 containerd[1890]: time="2026-03-07T01:11:04.422917300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 7.373124585s" Mar 7 01:11:04.423762 containerd[1890]: time="2026-03-07T01:11:04.422970130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:11:04.459689 containerd[1890]: time="2026-03-07T01:11:04.459647216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.477 [WARNING][5921] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0", GenerateName:"whisker-669bdb6b65-", Namespace:"calico-system", SelfLink:"", UID:"e39111b2-0e10-4e35-8ff3-e8249485a878", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669bdb6b65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10", Pod:"whisker-669bdb6b65-nhkt8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali99b024c8fdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.478 [INFO][5921] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.478 [INFO][5921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" iface="eth0" netns="" Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.478 [INFO][5921] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.478 [INFO][5921] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.545 [INFO][5928] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" HandleID="k8s-pod-network.99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.545 [INFO][5928] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.545 [INFO][5928] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.573 [WARNING][5928] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" HandleID="k8s-pod-network.99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.573 [INFO][5928] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" HandleID="k8s-pod-network.99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.577 [INFO][5928] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:04.588906 containerd[1890]: 2026-03-07 01:11:04.583 [INFO][5921] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Mar 7 01:11:04.588906 containerd[1890]: time="2026-03-07T01:11:04.588795829Z" level=info msg="TearDown network for sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\" successfully" Mar 7 01:11:04.588906 containerd[1890]: time="2026-03-07T01:11:04.588817870Z" level=info msg="StopPodSandbox for \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\" returns successfully" Mar 7 01:11:04.664096 containerd[1890]: time="2026-03-07T01:11:04.664053862Z" level=info msg="RemovePodSandbox for \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\"" Mar 7 01:11:04.664699 containerd[1890]: time="2026-03-07T01:11:04.664403924Z" level=info msg="Forcibly stopping sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\"" Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.807 [WARNING][5942] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0", GenerateName:"whisker-669bdb6b65-", Namespace:"calico-system", SelfLink:"", UID:"e39111b2-0e10-4e35-8ff3-e8249485a878", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669bdb6b65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-156", ContainerID:"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10", Pod:"whisker-669bdb6b65-nhkt8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali99b024c8fdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.807 [INFO][5942] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.807 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" iface="eth0" netns="" Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.807 [INFO][5942] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.807 [INFO][5942] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.851 [INFO][5949] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" HandleID="k8s-pod-network.99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.851 [INFO][5949] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.851 [INFO][5949] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.859 [WARNING][5949] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" HandleID="k8s-pod-network.99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.859 [INFO][5949] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" HandleID="k8s-pod-network.99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.861 [INFO][5949] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:04.867181 containerd[1890]: 2026-03-07 01:11:04.864 [INFO][5942] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc" Mar 7 01:11:04.868542 containerd[1890]: time="2026-03-07T01:11:04.867358941Z" level=info msg="TearDown network for sandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\" successfully" Mar 7 01:11:04.872873 containerd[1890]: time="2026-03-07T01:11:04.872834524Z" level=info msg="CreateContainer within sandbox \"867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:11:04.876372 containerd[1890]: time="2026-03-07T01:11:04.876112875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:11:04.876372 containerd[1890]: time="2026-03-07T01:11:04.876195218Z" level=info msg="RemovePodSandbox \"99d0b5775a858fd6614d3b6b3e297a9efb07e6a6ac5b83e8eaca9c48e1ad6ffc\" returns successfully" Mar 7 01:11:04.999943 containerd[1890]: time="2026-03-07T01:11:04.999301369Z" level=info msg="CreateContainer within sandbox \"867c5012f53e58bf21093f43842105ca0cd66fae284286e046fd877136d0e274\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"70f01a5ab809fe17f47e1809d6725e5955ba292a68328d25a571019026a765fa\"" Mar 7 01:11:05.024792 containerd[1890]: time="2026-03-07T01:11:05.024232789Z" level=info msg="StartContainer for \"70f01a5ab809fe17f47e1809d6725e5955ba292a68328d25a571019026a765fa\"" Mar 7 01:11:05.492334 systemd[1]: Started cri-containerd-70f01a5ab809fe17f47e1809d6725e5955ba292a68328d25a571019026a765fa.scope - libcontainer container 70f01a5ab809fe17f47e1809d6725e5955ba292a68328d25a571019026a765fa. Mar 7 01:11:05.592292 containerd[1890]: time="2026-03-07T01:11:05.592244267Z" level=info msg="StartContainer for \"70f01a5ab809fe17f47e1809d6725e5955ba292a68328d25a571019026a765fa\" returns successfully" Mar 7 01:11:07.461463 systemd[1]: Started sshd@9-172.31.29.156:22-68.220.241.50:55226.service - OpenSSH per-connection server daemon (68.220.241.50:55226). 
Mar 7 01:11:07.730097 kubelet[3114]: I0307 01:11:07.721719 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-9kx9l" podStartSLOduration=31.808374779 podStartE2EDuration="46.716591247s" podCreationTimestamp="2026-03-07 01:10:21 +0000 UTC" firstStartedPulling="2026-03-07 01:10:49.560578286 +0000 UTC m=+50.742549449" lastFinishedPulling="2026-03-07 01:11:04.468794737 +0000 UTC m=+65.650765917" observedRunningTime="2026-03-07 01:11:07.691261198 +0000 UTC m=+68.873232378" watchObservedRunningTime="2026-03-07 01:11:07.716591247 +0000 UTC m=+68.898562427" Mar 7 01:11:07.741584 kubelet[3114]: I0307 01:11:07.733544 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5fffbd9bb8-p7j5c" podStartSLOduration=41.729911964 podStartE2EDuration="47.733140067s" podCreationTimestamp="2026-03-07 01:10:20 +0000 UTC" firstStartedPulling="2026-03-07 01:10:48.675345924 +0000 UTC m=+49.857317095" lastFinishedPulling="2026-03-07 01:10:54.67857404 +0000 UTC m=+55.860545198" observedRunningTime="2026-03-07 01:10:55.760542022 +0000 UTC m=+56.942513201" watchObservedRunningTime="2026-03-07 01:11:07.733140067 +0000 UTC m=+68.915111248" Mar 7 01:11:08.070101 containerd[1890]: time="2026-03-07T01:11:08.069962220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:08.087124 containerd[1890]: time="2026-03-07T01:11:08.087058144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:11:08.094052 containerd[1890]: time="2026-03-07T01:11:08.093485717Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:08.097394 containerd[1890]: 
time="2026-03-07T01:11:08.097337029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:08.098798 containerd[1890]: time="2026-03-07T01:11:08.098740402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.639031988s" Mar 7 01:11:08.098958 containerd[1890]: time="2026-03-07T01:11:08.098935465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:11:08.113700 sshd[6002]: Accepted publickey for core from 68.220.241.50 port 55226 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:08.118169 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:08.124260 containerd[1890]: time="2026-03-07T01:11:08.118948361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:11:08.130596 systemd-logind[1880]: New session 10 of user core. Mar 7 01:11:08.136780 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 7 01:11:08.212664 containerd[1890]: time="2026-03-07T01:11:08.212567312Z" level=info msg="CreateContainer within sandbox \"ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:11:08.269994 containerd[1890]: time="2026-03-07T01:11:08.267700170Z" level=info msg="CreateContainer within sandbox \"ed9ef43286ec5aaab75d39f65107e7ad4ac0543f7a1bcec29aaea0e5622cdf94\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f9bca9b0b8c6f15ce6dd01faf12ad04ee0042184d812d5eef2438b105481af23\"" Mar 7 01:11:08.269994 containerd[1890]: time="2026-03-07T01:11:08.269470982Z" level=info msg="StartContainer for \"f9bca9b0b8c6f15ce6dd01faf12ad04ee0042184d812d5eef2438b105481af23\"" Mar 7 01:11:08.794943 systemd[1]: run-containerd-runc-k8s.io-70f01a5ab809fe17f47e1809d6725e5955ba292a68328d25a571019026a765fa-runc.rbmZQV.mount: Deactivated successfully. Mar 7 01:11:08.903262 systemd[1]: Started cri-containerd-f9bca9b0b8c6f15ce6dd01faf12ad04ee0042184d812d5eef2438b105481af23.scope - libcontainer container f9bca9b0b8c6f15ce6dd01faf12ad04ee0042184d812d5eef2438b105481af23. Mar 7 01:11:09.060669 containerd[1890]: time="2026-03-07T01:11:09.060571577Z" level=info msg="StartContainer for \"f9bca9b0b8c6f15ce6dd01faf12ad04ee0042184d812d5eef2438b105481af23\" returns successfully" Mar 7 01:11:09.492739 systemd[1]: run-containerd-runc-k8s.io-f9bca9b0b8c6f15ce6dd01faf12ad04ee0042184d812d5eef2438b105481af23-runc.8NMh1w.mount: Deactivated successfully. Mar 7 01:11:09.591230 systemd[1]: run-containerd-runc-k8s.io-70f01a5ab809fe17f47e1809d6725e5955ba292a68328d25a571019026a765fa-runc.tK9J0J.mount: Deactivated successfully. 
Mar 7 01:11:09.618678 kubelet[3114]: I0307 01:11:09.618292 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dd6b64fbf-c45tm" podStartSLOduration=29.23254325 podStartE2EDuration="47.618268991s" podCreationTimestamp="2026-03-07 01:10:22 +0000 UTC" firstStartedPulling="2026-03-07 01:10:49.732367786 +0000 UTC m=+50.914338953" lastFinishedPulling="2026-03-07 01:11:08.118093536 +0000 UTC m=+69.300064694" observedRunningTime="2026-03-07 01:11:09.370544522 +0000 UTC m=+70.552515701" watchObservedRunningTime="2026-03-07 01:11:09.618268991 +0000 UTC m=+70.800240174" Mar 7 01:11:10.083265 sshd[6002]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:10.091109 systemd[1]: sshd@9-172.31.29.156:22-68.220.241.50:55226.service: Deactivated successfully. Mar 7 01:11:10.095304 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:11:10.096466 systemd-logind[1880]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:11:10.097777 systemd-logind[1880]: Removed session 10. 
Mar 7 01:11:11.136521 containerd[1890]: time="2026-03-07T01:11:11.136459375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:11.138466 containerd[1890]: time="2026-03-07T01:11:11.138267517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:11:11.140980 containerd[1890]: time="2026-03-07T01:11:11.140535979Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:11.145964 containerd[1890]: time="2026-03-07T01:11:11.145188726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:11.146391 containerd[1890]: time="2026-03-07T01:11:11.146336510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 3.027339182s" Mar 7 01:11:11.146522 containerd[1890]: time="2026-03-07T01:11:11.146498460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:11:11.147599 containerd[1890]: time="2026-03-07T01:11:11.147572745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:11:11.158690 containerd[1890]: time="2026-03-07T01:11:11.158636775Z" level=info msg="CreateContainer within sandbox \"cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:11:11.202752 containerd[1890]: time="2026-03-07T01:11:11.202689567Z" level=info msg="CreateContainer within sandbox \"cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3f4e82b066a610c7a1d0fbaf65c8e0bac1e1149f2a0adf142b41793163cb2ce4\"" Mar 7 01:11:11.203744 containerd[1890]: time="2026-03-07T01:11:11.203689340Z" level=info msg="StartContainer for \"3f4e82b066a610c7a1d0fbaf65c8e0bac1e1149f2a0adf142b41793163cb2ce4\"" Mar 7 01:11:11.316297 systemd[1]: Started cri-containerd-3f4e82b066a610c7a1d0fbaf65c8e0bac1e1149f2a0adf142b41793163cb2ce4.scope - libcontainer container 3f4e82b066a610c7a1d0fbaf65c8e0bac1e1149f2a0adf142b41793163cb2ce4. Mar 7 01:11:11.367747 containerd[1890]: time="2026-03-07T01:11:11.367236284Z" level=info msg="StartContainer for \"3f4e82b066a610c7a1d0fbaf65c8e0bac1e1149f2a0adf142b41793163cb2ce4\" returns successfully" Mar 7 01:11:12.865673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973538817.mount: Deactivated successfully. 
Mar 7 01:11:12.968493 containerd[1890]: time="2026-03-07T01:11:12.967525128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:12.972432 containerd[1890]: time="2026-03-07T01:11:12.972352179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:11:12.977941 containerd[1890]: time="2026-03-07T01:11:12.977692145Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:12.986450 containerd[1890]: time="2026-03-07T01:11:12.985978777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:12.988183 containerd[1890]: time="2026-03-07T01:11:12.987393577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.839785083s" Mar 7 01:11:12.988183 containerd[1890]: time="2026-03-07T01:11:12.987441403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:11:12.992370 containerd[1890]: time="2026-03-07T01:11:12.992337318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:11:13.043668 containerd[1890]: time="2026-03-07T01:11:13.043588005Z" level=info msg="CreateContainer within sandbox 
\"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:11:13.071859 containerd[1890]: time="2026-03-07T01:11:13.071817106Z" level=info msg="CreateContainer within sandbox \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2\"" Mar 7 01:11:13.073103 containerd[1890]: time="2026-03-07T01:11:13.073070900Z" level=info msg="StartContainer for \"a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2\"" Mar 7 01:11:13.134336 systemd[1]: Started cri-containerd-a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2.scope - libcontainer container a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2. Mar 7 01:11:13.220921 containerd[1890]: time="2026-03-07T01:11:13.220870224Z" level=info msg="StartContainer for \"a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2\" returns successfully" Mar 7 01:11:13.442100 kubelet[3114]: I0307 01:11:13.441923 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-669bdb6b65-nhkt8" podStartSLOduration=24.731608853 podStartE2EDuration="48.441898238s" podCreationTimestamp="2026-03-07 01:10:25 +0000 UTC" firstStartedPulling="2026-03-07 01:10:49.278140497 +0000 UTC m=+50.460111672" lastFinishedPulling="2026-03-07 01:11:12.988429899 +0000 UTC m=+74.170401057" observedRunningTime="2026-03-07 01:11:13.441079554 +0000 UTC m=+74.623050734" watchObservedRunningTime="2026-03-07 01:11:13.441898238 +0000 UTC m=+74.623869418" Mar 7 01:11:13.584057 containerd[1890]: time="2026-03-07T01:11:13.583861454Z" level=info msg="StopContainer for \"a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2\" with timeout 30 (s)" Mar 7 01:11:13.584057 containerd[1890]: time="2026-03-07T01:11:13.583913423Z" level=info 
msg="StopContainer for \"3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f\" with timeout 30 (s)" Mar 7 01:11:13.584376 containerd[1890]: time="2026-03-07T01:11:13.584346470Z" level=info msg="Stop container \"a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2\" with signal terminated" Mar 7 01:11:13.584822 containerd[1890]: time="2026-03-07T01:11:13.584528901Z" level=info msg="Stop container \"3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f\" with signal terminated" Mar 7 01:11:13.600883 systemd[1]: cri-containerd-a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2.scope: Deactivated successfully. Mar 7 01:11:13.628639 systemd[1]: cri-containerd-3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f.scope: Deactivated successfully. Mar 7 01:11:13.652940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2-rootfs.mount: Deactivated successfully. Mar 7 01:11:13.673215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f-rootfs.mount: Deactivated successfully. 
Mar 7 01:11:13.679720 containerd[1890]: time="2026-03-07T01:11:13.668647000Z" level=info msg="shim disconnected" id=3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f namespace=k8s.io Mar 7 01:11:13.679996 containerd[1890]: time="2026-03-07T01:11:13.679721604Z" level=warning msg="cleaning up after shim disconnected" id=3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f namespace=k8s.io Mar 7 01:11:13.679996 containerd[1890]: time="2026-03-07T01:11:13.679742423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:13.681191 containerd[1890]: time="2026-03-07T01:11:13.674568937Z" level=info msg="shim disconnected" id=a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2 namespace=k8s.io Mar 7 01:11:13.681191 containerd[1890]: time="2026-03-07T01:11:13.680199210Z" level=warning msg="cleaning up after shim disconnected" id=a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2 namespace=k8s.io Mar 7 01:11:13.681191 containerd[1890]: time="2026-03-07T01:11:13.680221034Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:13.706147 containerd[1890]: time="2026-03-07T01:11:13.705258253Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:11:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:11:13.713938 containerd[1890]: time="2026-03-07T01:11:13.713890535Z" level=info msg="StopContainer for \"a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2\" returns successfully" Mar 7 01:11:13.728504 containerd[1890]: time="2026-03-07T01:11:13.728457949Z" level=info msg="StopContainer for \"3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f\" returns successfully" Mar 7 01:11:13.729223 containerd[1890]: time="2026-03-07T01:11:13.729175685Z" level=info msg="StopPodSandbox for 
\"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\"" Mar 7 01:11:13.729340 containerd[1890]: time="2026-03-07T01:11:13.729239585Z" level=info msg="Container to stop \"3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:11:13.729340 containerd[1890]: time="2026-03-07T01:11:13.729258512Z" level=info msg="Container to stop \"a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:11:13.735546 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10-shm.mount: Deactivated successfully. Mar 7 01:11:13.742787 systemd[1]: cri-containerd-5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10.scope: Deactivated successfully. Mar 7 01:11:13.780112 containerd[1890]: time="2026-03-07T01:11:13.779838616Z" level=info msg="shim disconnected" id=5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10 namespace=k8s.io Mar 7 01:11:13.780112 containerd[1890]: time="2026-03-07T01:11:13.779900680Z" level=warning msg="cleaning up after shim disconnected" id=5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10 namespace=k8s.io Mar 7 01:11:13.780112 containerd[1890]: time="2026-03-07T01:11:13.779913283Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:13.782544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10-rootfs.mount: Deactivated successfully. 
Mar 7 01:11:14.304722 systemd-networkd[1818]: cali99b024c8fdc: Link DOWN Mar 7 01:11:14.304733 systemd-networkd[1818]: cali99b024c8fdc: Lost carrier Mar 7 01:11:14.389230 kubelet[3114]: I0307 01:11:14.388347 3114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.271 [INFO][6346] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.276 [INFO][6346] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" iface="eth0" netns="/var/run/netns/cni-6b34dd88-60c3-fb9c-6fd4-098eede21615" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.276 [INFO][6346] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" iface="eth0" netns="/var/run/netns/cni-6b34dd88-60c3-fb9c-6fd4-098eede21615" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.306 [INFO][6346] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" after=30.075079ms iface="eth0" netns="/var/run/netns/cni-6b34dd88-60c3-fb9c-6fd4-098eede21615" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.306 [INFO][6346] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.306 [INFO][6346] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.724 [INFO][6353] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.729 [INFO][6353] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.729 [INFO][6353] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.802 [INFO][6353] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.802 [INFO][6353] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0" Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.804 [INFO][6353] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:11:14.809138 containerd[1890]: 2026-03-07 01:11:14.806 [INFO][6346] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Mar 7 01:11:14.817075 systemd[1]: run-netns-cni\x2d6b34dd88\x2d60c3\x2dfb9c\x2d6fd4\x2d098eede21615.mount: Deactivated successfully. 
Mar 7 01:11:14.821352 containerd[1890]: time="2026-03-07T01:11:14.821299218Z" level=info msg="TearDown network for sandbox \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" successfully" Mar 7 01:11:14.821352 containerd[1890]: time="2026-03-07T01:11:14.821347118Z" level=info msg="StopPodSandbox for \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" returns successfully" Mar 7 01:11:15.089446 kubelet[3114]: I0307 01:11:15.089313 3114 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e39111b2-0e10-4e35-8ff3-e8249485a878-whisker-backend-key-pair\") pod \"e39111b2-0e10-4e35-8ff3-e8249485a878\" (UID: \"e39111b2-0e10-4e35-8ff3-e8249485a878\") " Mar 7 01:11:15.089446 kubelet[3114]: I0307 01:11:15.089396 3114 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzw6w\" (UniqueName: \"kubernetes.io/projected/e39111b2-0e10-4e35-8ff3-e8249485a878-kube-api-access-bzw6w\") pod \"e39111b2-0e10-4e35-8ff3-e8249485a878\" (UID: \"e39111b2-0e10-4e35-8ff3-e8249485a878\") " Mar 7 01:11:15.089446 kubelet[3114]: I0307 01:11:15.089442 3114 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e39111b2-0e10-4e35-8ff3-e8249485a878-nginx-config\") pod \"e39111b2-0e10-4e35-8ff3-e8249485a878\" (UID: \"e39111b2-0e10-4e35-8ff3-e8249485a878\") " Mar 7 01:11:15.090152 kubelet[3114]: I0307 01:11:15.089469 3114 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e39111b2-0e10-4e35-8ff3-e8249485a878-whisker-ca-bundle\") pod \"e39111b2-0e10-4e35-8ff3-e8249485a878\" (UID: \"e39111b2-0e10-4e35-8ff3-e8249485a878\") " Mar 7 01:11:15.164972 systemd[1]: 
var-lib-kubelet-pods-e39111b2\x2d0e10\x2d4e35\x2d8ff3\x2de8249485a878-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbzw6w.mount: Deactivated successfully. Mar 7 01:11:15.172477 kubelet[3114]: I0307 01:11:15.170594 3114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39111b2-0e10-4e35-8ff3-e8249485a878-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e39111b2-0e10-4e35-8ff3-e8249485a878" (UID: "e39111b2-0e10-4e35-8ff3-e8249485a878"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:11:15.173955 systemd[1]: var-lib-kubelet-pods-e39111b2\x2d0e10\x2d4e35\x2d8ff3\x2de8249485a878-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 01:11:15.179332 kubelet[3114]: I0307 01:11:15.179293 3114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39111b2-0e10-4e35-8ff3-e8249485a878-kube-api-access-bzw6w" (OuterVolumeSpecName: "kube-api-access-bzw6w") pod "e39111b2-0e10-4e35-8ff3-e8249485a878" (UID: "e39111b2-0e10-4e35-8ff3-e8249485a878"). InnerVolumeSpecName "kube-api-access-bzw6w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:11:15.179551 kubelet[3114]: I0307 01:11:15.179534 3114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e39111b2-0e10-4e35-8ff3-e8249485a878-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "e39111b2-0e10-4e35-8ff3-e8249485a878" (UID: "e39111b2-0e10-4e35-8ff3-e8249485a878"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:11:15.179709 kubelet[3114]: I0307 01:11:15.162875 3114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e39111b2-0e10-4e35-8ff3-e8249485a878-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e39111b2-0e10-4e35-8ff3-e8249485a878" (UID: "e39111b2-0e10-4e35-8ff3-e8249485a878"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:11:15.194530 systemd[1]: Started sshd@10-172.31.29.156:22-68.220.241.50:39042.service - OpenSSH per-connection server daemon (68.220.241.50:39042). Mar 7 01:11:15.196293 kubelet[3114]: I0307 01:11:15.196203 3114 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e39111b2-0e10-4e35-8ff3-e8249485a878-whisker-backend-key-pair\") on node \"ip-172-31-29-156\" DevicePath \"\"" Mar 7 01:11:15.196293 kubelet[3114]: I0307 01:11:15.196235 3114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bzw6w\" (UniqueName: \"kubernetes.io/projected/e39111b2-0e10-4e35-8ff3-e8249485a878-kube-api-access-bzw6w\") on node \"ip-172-31-29-156\" DevicePath \"\"" Mar 7 01:11:15.196293 kubelet[3114]: I0307 01:11:15.196254 3114 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e39111b2-0e10-4e35-8ff3-e8249485a878-nginx-config\") on node \"ip-172-31-29-156\" DevicePath \"\"" Mar 7 01:11:15.196293 kubelet[3114]: I0307 01:11:15.196267 3114 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e39111b2-0e10-4e35-8ff3-e8249485a878-whisker-ca-bundle\") on node \"ip-172-31-29-156\" DevicePath \"\"" Mar 7 01:11:15.447266 systemd[1]: Removed slice kubepods-besteffort-pode39111b2_0e10_4e35_8ff3_e8249485a878.slice - libcontainer container 
kubepods-besteffort-pode39111b2_0e10_4e35_8ff3_e8249485a878.slice. Mar 7 01:11:15.769858 sshd[6381]: Accepted publickey for core from 68.220.241.50 port 39042 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:15.774059 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:15.785818 systemd-logind[1880]: New session 11 of user core. Mar 7 01:11:15.792505 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:11:16.167938 containerd[1890]: time="2026-03-07T01:11:16.165967726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:16.171034 containerd[1890]: time="2026-03-07T01:11:16.170280189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:11:16.175714 containerd[1890]: time="2026-03-07T01:11:16.175660466Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:16.181527 containerd[1890]: time="2026-03-07T01:11:16.181459957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:16.184454 containerd[1890]: time="2026-03-07T01:11:16.182294152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.189915129s" Mar 7 01:11:16.184454 
containerd[1890]: time="2026-03-07T01:11:16.182337931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:11:16.193492 containerd[1890]: time="2026-03-07T01:11:16.193194488Z" level=info msg="CreateContainer within sandbox \"cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:11:16.225751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156633942.mount: Deactivated successfully. Mar 7 01:11:16.235739 containerd[1890]: time="2026-03-07T01:11:16.235685490Z" level=info msg="CreateContainer within sandbox \"cd668f9e73dec590a2e3ac013e82ca523c46cf9a3246cd1d3dd2abd3ba52af63\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3b80d4a1618fcb4f1b3179d3cd08f90fdf6b5119762eb49d5961bdb6d0033f5e\"" Mar 7 01:11:16.237482 containerd[1890]: time="2026-03-07T01:11:16.237438111Z" level=info msg="StartContainer for \"3b80d4a1618fcb4f1b3179d3cd08f90fdf6b5119762eb49d5961bdb6d0033f5e\"" Mar 7 01:11:16.346391 ntpd[1872]: Deleting interface #11 cali99b024c8fdc, fe80::ecee:eeff:feee:eeee%6#123, interface stats: received=0, sent=0, dropped=0, active_time=20 secs Mar 7 01:11:16.348614 ntpd[1872]: 7 Mar 01:11:16 ntpd[1872]: Deleting interface #11 cali99b024c8fdc, fe80::ecee:eeff:feee:eeee%6#123, interface stats: received=0, sent=0, dropped=0, active_time=20 secs Mar 7 01:11:16.369242 systemd[1]: Started cri-containerd-3b80d4a1618fcb4f1b3179d3cd08f90fdf6b5119762eb49d5961bdb6d0033f5e.scope - libcontainer container 3b80d4a1618fcb4f1b3179d3cd08f90fdf6b5119762eb49d5961bdb6d0033f5e. 
Mar 7 01:11:16.464968 containerd[1890]: time="2026-03-07T01:11:16.462603068Z" level=info msg="StartContainer for \"3b80d4a1618fcb4f1b3179d3cd08f90fdf6b5119762eb49d5961bdb6d0033f5e\" returns successfully" Mar 7 01:11:17.008779 kubelet[3114]: I0307 01:11:17.003052 3114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39111b2-0e10-4e35-8ff3-e8249485a878" path="/var/lib/kubelet/pods/e39111b2-0e10-4e35-8ff3-e8249485a878/volumes" Mar 7 01:11:17.192753 sshd[6381]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:17.197362 systemd[1]: sshd@10-172.31.29.156:22-68.220.241.50:39042.service: Deactivated successfully. Mar 7 01:11:17.199824 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:11:17.204158 systemd-logind[1880]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:11:17.207391 systemd-logind[1880]: Removed session 11. Mar 7 01:11:17.288801 systemd[1]: Started sshd@11-172.31.29.156:22-68.220.241.50:39052.service - OpenSSH per-connection server daemon (68.220.241.50:39052). 
Mar 7 01:11:17.398883 kubelet[3114]: I0307 01:11:17.397094 3114 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:11:17.402123 kubelet[3114]: I0307 01:11:17.402082 3114 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:11:17.442522 kubelet[3114]: I0307 01:11:17.441719 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9sl5j" podStartSLOduration=29.165800972 podStartE2EDuration="55.441701662s" podCreationTimestamp="2026-03-07 01:10:22 +0000 UTC" firstStartedPulling="2026-03-07 01:10:49.907920067 +0000 UTC m=+51.089891236" lastFinishedPulling="2026-03-07 01:11:16.183820754 +0000 UTC m=+77.365791926" observedRunningTime="2026-03-07 01:11:17.437565187 +0000 UTC m=+78.619536367" watchObservedRunningTime="2026-03-07 01:11:17.441701662 +0000 UTC m=+78.623672873" Mar 7 01:11:17.819136 sshd[6437]: Accepted publickey for core from 68.220.241.50 port 39052 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:17.820888 sshd[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:17.826889 systemd-logind[1880]: New session 12 of user core. Mar 7 01:11:17.833281 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:11:18.363119 sshd[6437]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:18.367674 systemd[1]: sshd@11-172.31.29.156:22-68.220.241.50:39052.service: Deactivated successfully. Mar 7 01:11:18.370815 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:11:18.372843 systemd-logind[1880]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:11:18.374390 systemd-logind[1880]: Removed session 12. 
Mar 7 01:11:18.454370 systemd[1]: Started sshd@12-172.31.29.156:22-68.220.241.50:39056.service - OpenSSH per-connection server daemon (68.220.241.50:39056). Mar 7 01:11:18.972298 sshd[6447]: Accepted publickey for core from 68.220.241.50 port 39056 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:18.973368 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:18.982096 systemd-logind[1880]: New session 13 of user core. Mar 7 01:11:18.986215 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 01:11:19.451125 sshd[6447]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:19.456737 systemd[1]: sshd@12-172.31.29.156:22-68.220.241.50:39056.service: Deactivated successfully. Mar 7 01:11:19.460813 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:11:19.463185 systemd-logind[1880]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:11:19.466402 systemd-logind[1880]: Removed session 13. Mar 7 01:11:24.543405 systemd[1]: Started sshd@13-172.31.29.156:22-68.220.241.50:33228.service - OpenSSH per-connection server daemon (68.220.241.50:33228). Mar 7 01:11:25.112676 sshd[6521]: Accepted publickey for core from 68.220.241.50 port 33228 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:25.116165 sshd[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:25.122235 systemd-logind[1880]: New session 14 of user core. Mar 7 01:11:25.127237 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:11:25.853924 sshd[6521]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:25.858905 systemd-logind[1880]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:11:25.859799 systemd[1]: sshd@13-172.31.29.156:22-68.220.241.50:33228.service: Deactivated successfully. Mar 7 01:11:25.862240 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 7 01:11:25.863786 systemd-logind[1880]: Removed session 14.
Mar 7 01:11:25.943360 systemd[1]: Started sshd@14-172.31.29.156:22-68.220.241.50:33232.service - OpenSSH per-connection server daemon (68.220.241.50:33232).
Mar 7 01:11:26.437269 sshd[6536]: Accepted publickey for core from 68.220.241.50 port 33232 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:26.439109 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:26.445890 systemd-logind[1880]: New session 15 of user core.
Mar 7 01:11:26.453253 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:11:27.326779 sshd[6536]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:27.331598 systemd-logind[1880]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:11:27.332983 systemd[1]: sshd@14-172.31.29.156:22-68.220.241.50:33232.service: Deactivated successfully.
Mar 7 01:11:27.335500 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:11:27.338506 systemd-logind[1880]: Removed session 15.
Mar 7 01:11:27.417347 systemd[1]: Started sshd@15-172.31.29.156:22-68.220.241.50:33242.service - OpenSSH per-connection server daemon (68.220.241.50:33242).
Mar 7 01:11:27.954790 sshd[6552]: Accepted publickey for core from 68.220.241.50 port 33242 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:27.956553 sshd[6552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:27.962959 systemd-logind[1880]: New session 16 of user core.
Mar 7 01:11:27.967228 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:11:28.941288 sshd[6552]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:28.957666 systemd[1]: sshd@15-172.31.29.156:22-68.220.241.50:33242.service: Deactivated successfully.
Mar 7 01:11:28.962468 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:11:28.965083 systemd-logind[1880]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:11:28.967158 systemd-logind[1880]: Removed session 16.
Mar 7 01:11:29.025749 systemd[1]: Started sshd@16-172.31.29.156:22-68.220.241.50:33254.service - OpenSSH per-connection server daemon (68.220.241.50:33254).
Mar 7 01:11:29.557821 sshd[6579]: Accepted publickey for core from 68.220.241.50 port 33254 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:29.560925 sshd[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:29.566303 systemd-logind[1880]: New session 17 of user core.
Mar 7 01:11:29.571241 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:11:30.755112 sshd[6579]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:30.764780 systemd-logind[1880]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:11:30.766345 systemd[1]: sshd@16-172.31.29.156:22-68.220.241.50:33254.service: Deactivated successfully.
Mar 7 01:11:30.768865 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:11:30.770294 systemd-logind[1880]: Removed session 17.
Mar 7 01:11:30.839685 systemd[1]: Started sshd@17-172.31.29.156:22-68.220.241.50:33264.service - OpenSSH per-connection server daemon (68.220.241.50:33264).
Mar 7 01:11:30.855915 kubelet[3114]: I0307 01:11:30.841492 3114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:11:31.384376 sshd[6592]: Accepted publickey for core from 68.220.241.50 port 33264 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:31.386297 sshd[6592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:31.392086 systemd-logind[1880]: New session 18 of user core.
Mar 7 01:11:31.397236 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:11:31.918277 sshd[6592]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:31.922710 systemd-logind[1880]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:11:31.923931 systemd[1]: sshd@17-172.31.29.156:22-68.220.241.50:33264.service: Deactivated successfully.
Mar 7 01:11:31.926489 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:11:31.929390 systemd-logind[1880]: Removed session 18.
Mar 7 01:11:37.012451 systemd[1]: Started sshd@18-172.31.29.156:22-68.220.241.50:45748.service - OpenSSH per-connection server daemon (68.220.241.50:45748).
Mar 7 01:11:37.583017 sshd[6627]: Accepted publickey for core from 68.220.241.50 port 45748 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:37.585214 sshd[6627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:37.590678 systemd-logind[1880]: New session 19 of user core.
Mar 7 01:11:37.597323 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:11:38.515758 sshd[6627]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:38.520089 systemd[1]: sshd@18-172.31.29.156:22-68.220.241.50:45748.service: Deactivated successfully.
Mar 7 01:11:38.522468 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:11:38.524625 systemd-logind[1880]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:11:38.526081 systemd-logind[1880]: Removed session 19.
Mar 7 01:11:38.948658 kubelet[3114]: I0307 01:11:38.947982 3114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:11:43.611807 systemd[1]: Started sshd@19-172.31.29.156:22-68.220.241.50:39470.service - OpenSSH per-connection server daemon (68.220.241.50:39470).
Mar 7 01:11:44.158390 sshd[6682]: Accepted publickey for core from 68.220.241.50 port 39470 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:44.161258 sshd[6682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:44.166924 systemd-logind[1880]: New session 20 of user core.
Mar 7 01:11:44.172225 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:11:45.163555 sshd[6682]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:45.169612 systemd[1]: sshd@19-172.31.29.156:22-68.220.241.50:39470.service: Deactivated successfully.
Mar 7 01:11:45.173228 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:11:45.174131 systemd-logind[1880]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:11:45.175502 systemd-logind[1880]: Removed session 20.
Mar 7 01:11:50.258663 systemd[1]: Started sshd@20-172.31.29.156:22-68.220.241.50:39482.service - OpenSSH per-connection server daemon (68.220.241.50:39482).
Mar 7 01:11:50.824440 sshd[6696]: Accepted publickey for core from 68.220.241.50 port 39482 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:50.827055 sshd[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:50.833546 systemd-logind[1880]: New session 21 of user core.
Mar 7 01:11:50.838225 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:11:51.855990 sshd[6696]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:51.862859 systemd[1]: sshd@20-172.31.29.156:22-68.220.241.50:39482.service: Deactivated successfully.
Mar 7 01:11:51.867214 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:11:51.870591 systemd-logind[1880]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:11:51.872704 systemd-logind[1880]: Removed session 21.
Mar 7 01:11:56.948401 systemd[1]: Started sshd@21-172.31.29.156:22-68.220.241.50:42456.service - OpenSSH per-connection server daemon (68.220.241.50:42456).
Mar 7 01:11:57.530824 sshd[6758]: Accepted publickey for core from 68.220.241.50 port 42456 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:57.534662 sshd[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:57.541182 systemd-logind[1880]: New session 22 of user core.
Mar 7 01:11:57.545211 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:11:58.741244 sshd[6758]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:58.746127 systemd-logind[1880]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:11:58.746941 systemd[1]: sshd@21-172.31.29.156:22-68.220.241.50:42456.service: Deactivated successfully.
Mar 7 01:11:58.749783 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:11:58.751306 systemd-logind[1880]: Removed session 22.
Mar 7 01:12:04.934242 kubelet[3114]: I0307 01:12:04.934190 3114 scope.go:117] "RemoveContainer" containerID="3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f"
Mar 7 01:12:05.218889 containerd[1890]: time="2026-03-07T01:12:05.184138536Z" level=info msg="RemoveContainer for \"3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f\""
Mar 7 01:12:05.357541 containerd[1890]: time="2026-03-07T01:12:05.357465615Z" level=info msg="RemoveContainer for \"3cc94983e58e3d6743bef0a3ac0d0b41c35b4d4817ab99312344c63c19a4995f\" returns successfully"
Mar 7 01:12:05.369320 kubelet[3114]: I0307 01:12:05.369284 3114 scope.go:117] "RemoveContainer" containerID="a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2"
Mar 7 01:12:05.372170 containerd[1890]: time="2026-03-07T01:12:05.371787898Z" level=info msg="RemoveContainer for \"a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2\""
Mar 7 01:12:05.378150 containerd[1890]: time="2026-03-07T01:12:05.378096090Z" level=info msg="RemoveContainer for \"a36db223aa94d0e22eba9618cfd5f8cc7fc9610e5162efb3feed488dd2eae6a2\" returns successfully"
Mar 7 01:12:05.382904 containerd[1890]: time="2026-03-07T01:12:05.382842233Z" level=info msg="StopPodSandbox for \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\""
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:05.886 [WARNING][6780] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:05.894 [INFO][6780] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10"
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:05.894 [INFO][6780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" iface="eth0" netns=""
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:05.894 [INFO][6780] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10"
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:05.894 [INFO][6780] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10"
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:06.286 [INFO][6787] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:06.290 [INFO][6787] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:06.291 [INFO][6787] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:06.305 [WARNING][6787] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:06.305 [INFO][6787] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:06.307 [INFO][6787] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:12:06.312143 containerd[1890]: 2026-03-07 01:12:06.309 [INFO][6780] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10"
Mar 7 01:12:06.314668 containerd[1890]: time="2026-03-07T01:12:06.312193057Z" level=info msg="TearDown network for sandbox \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" successfully"
Mar 7 01:12:06.314668 containerd[1890]: time="2026-03-07T01:12:06.312223796Z" level=info msg="StopPodSandbox for \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" returns successfully"
Mar 7 01:12:06.314668 containerd[1890]: time="2026-03-07T01:12:06.313682608Z" level=info msg="RemovePodSandbox for \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\""
Mar 7 01:12:06.319705 containerd[1890]: time="2026-03-07T01:12:06.319656944Z" level=info msg="Forcibly stopping sandbox \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\""
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.367 [WARNING][6804] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" WorkloadEndpoint="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.367 [INFO][6804] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10"
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.367 [INFO][6804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" iface="eth0" netns=""
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.367 [INFO][6804] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10"
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.367 [INFO][6804] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10"
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.402 [INFO][6811] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.402 [INFO][6811] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.402 [INFO][6811] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.409 [WARNING][6811] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.409 [INFO][6811] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" HandleID="k8s-pod-network.5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10" Workload="ip--172--31--29--156-k8s-whisker--669bdb6b65--nhkt8-eth0"
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.410 [INFO][6811] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:12:06.418106 containerd[1890]: 2026-03-07 01:12:06.414 [INFO][6804] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10"
Mar 7 01:12:06.419479 containerd[1890]: time="2026-03-07T01:12:06.418155865Z" level=info msg="TearDown network for sandbox \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" successfully"
Mar 7 01:12:06.432682 containerd[1890]: time="2026-03-07T01:12:06.432622044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:12:06.432839 containerd[1890]: time="2026-03-07T01:12:06.432748438Z" level=info msg="RemovePodSandbox \"5237c383c7f60d776b4d1e7f0da8f300dfceabab4bad4d87ba9ba9d045bf7a10\" returns successfully"
Mar 7 01:12:23.165914 systemd[1]: run-containerd-runc-k8s.io-3651d5704743824a348ecfd91e5ff1a82a49bee489794f6ec5b07b0b2a57fc26-runc.ekdiuE.mount: Deactivated successfully.
Mar 7 01:12:47.165656 systemd[1]: cri-containerd-307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565.scope: Deactivated successfully.
Mar 7 01:12:47.165969 systemd[1]: cri-containerd-307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565.scope: Consumed 12.760s CPU time.
Mar 7 01:12:47.471959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565-rootfs.mount: Deactivated successfully.
Mar 7 01:12:47.552819 containerd[1890]: time="2026-03-07T01:12:47.511416123Z" level=info msg="shim disconnected" id=307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565 namespace=k8s.io
Mar 7 01:12:47.553409 containerd[1890]: time="2026-03-07T01:12:47.552822603Z" level=warning msg="cleaning up after shim disconnected" id=307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565 namespace=k8s.io
Mar 7 01:12:47.553409 containerd[1890]: time="2026-03-07T01:12:47.552847732Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:47.869817 kubelet[3114]: I0307 01:12:47.869667 3114 scope.go:117] "RemoveContainer" containerID="307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565"
Mar 7 01:12:47.998305 containerd[1890]: time="2026-03-07T01:12:47.998258084Z" level=info msg="CreateContainer within sandbox \"1ba4891a65be5041c41045c434b1257b1cf6af86e70305a89a360d34ea6efbce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 7 01:12:48.122960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2210669121.mount: Deactivated successfully.
Mar 7 01:12:48.134985 containerd[1890]: time="2026-03-07T01:12:48.134927729Z" level=info msg="CreateContainer within sandbox \"1ba4891a65be5041c41045c434b1257b1cf6af86e70305a89a360d34ea6efbce\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085\""
Mar 7 01:12:48.140985 containerd[1890]: time="2026-03-07T01:12:48.140941795Z" level=info msg="StartContainer for \"ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085\""
Mar 7 01:12:48.237250 systemd[1]: Started cri-containerd-ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085.scope - libcontainer container ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085.
Mar 7 01:12:48.253157 systemd[1]: cri-containerd-ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615.scope: Deactivated successfully.
Mar 7 01:12:48.254662 systemd[1]: cri-containerd-ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615.scope: Consumed 4.341s CPU time, 15.0M memory peak, 0B memory swap peak.
Mar 7 01:12:48.326532 containerd[1890]: time="2026-03-07T01:12:48.326309409Z" level=info msg="shim disconnected" id=ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615 namespace=k8s.io
Mar 7 01:12:48.326532 containerd[1890]: time="2026-03-07T01:12:48.326374927Z" level=warning msg="cleaning up after shim disconnected" id=ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615 namespace=k8s.io
Mar 7 01:12:48.326532 containerd[1890]: time="2026-03-07T01:12:48.326389956Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:48.345668 containerd[1890]: time="2026-03-07T01:12:48.345624949Z" level=info msg="StartContainer for \"ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085\" returns successfully"
Mar 7 01:12:48.470515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615-rootfs.mount: Deactivated successfully.
Mar 7 01:12:48.874504 kubelet[3114]: I0307 01:12:48.874397 3114 scope.go:117] "RemoveContainer" containerID="ce289c94248eb7a06a50c4ed907dcec6161377d6dd7480c437ee4e9146b7c615"
Mar 7 01:12:48.881293 containerd[1890]: time="2026-03-07T01:12:48.881248962Z" level=info msg="CreateContainer within sandbox \"5c43528995d89973cd80ab1e761aedd97d63fc3a2c691dcb931f81ba24c6f692\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 7 01:12:48.949063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514194410.mount: Deactivated successfully.
Mar 7 01:12:48.956022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530233802.mount: Deactivated successfully.
Mar 7 01:12:48.956399 containerd[1890]: time="2026-03-07T01:12:48.956348153Z" level=info msg="CreateContainer within sandbox \"5c43528995d89973cd80ab1e761aedd97d63fc3a2c691dcb931f81ba24c6f692\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"44e597424736841dc151e260779cba73607cb4de3d1fc00b41b52aac1de329ec\""
Mar 7 01:12:48.957917 containerd[1890]: time="2026-03-07T01:12:48.957882677Z" level=info msg="StartContainer for \"44e597424736841dc151e260779cba73607cb4de3d1fc00b41b52aac1de329ec\""
Mar 7 01:12:49.000390 systemd[1]: Started cri-containerd-44e597424736841dc151e260779cba73607cb4de3d1fc00b41b52aac1de329ec.scope - libcontainer container 44e597424736841dc151e260779cba73607cb4de3d1fc00b41b52aac1de329ec.
Mar 7 01:12:49.079510 containerd[1890]: time="2026-03-07T01:12:49.079458041Z" level=info msg="StartContainer for \"44e597424736841dc151e260779cba73607cb4de3d1fc00b41b52aac1de329ec\" returns successfully"
Mar 7 01:12:51.719872 systemd[1]: cri-containerd-dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0.scope: Deactivated successfully.
Mar 7 01:12:51.720571 systemd[1]: cri-containerd-dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0.scope: Consumed 2.982s CPU time, 15.3M memory peak, 0B memory swap peak.
Mar 7 01:12:51.752082 containerd[1890]: time="2026-03-07T01:12:51.750328836Z" level=info msg="shim disconnected" id=dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0 namespace=k8s.io
Mar 7 01:12:51.752082 containerd[1890]: time="2026-03-07T01:12:51.750399451Z" level=warning msg="cleaning up after shim disconnected" id=dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0 namespace=k8s.io
Mar 7 01:12:51.752082 containerd[1890]: time="2026-03-07T01:12:51.750411645Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:51.754196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0-rootfs.mount: Deactivated successfully.
Mar 7 01:12:51.898491 kubelet[3114]: I0307 01:12:51.898461 3114 scope.go:117] "RemoveContainer" containerID="dbf2459c4a4ab21c402605302fff902fdca2b20577d696f86b65769a910deda0"
Mar 7 01:12:51.901572 containerd[1890]: time="2026-03-07T01:12:51.901523602Z" level=info msg="CreateContainer within sandbox \"144703bab25d537d7469841d731817f086a52d45af6b6a0aff734182f19d09d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 7 01:12:51.927193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732708540.mount: Deactivated successfully.
Mar 7 01:12:51.931289 containerd[1890]: time="2026-03-07T01:12:51.931244840Z" level=info msg="CreateContainer within sandbox \"144703bab25d537d7469841d731817f086a52d45af6b6a0aff734182f19d09d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8035ec618551f7bec14920de5287414ea3ad810ec50015bdd2d789afd594f92d\""
Mar 7 01:12:51.931951 containerd[1890]: time="2026-03-07T01:12:51.931919998Z" level=info msg="StartContainer for \"8035ec618551f7bec14920de5287414ea3ad810ec50015bdd2d789afd594f92d\""
Mar 7 01:12:51.978229 systemd[1]: Started cri-containerd-8035ec618551f7bec14920de5287414ea3ad810ec50015bdd2d789afd594f92d.scope - libcontainer container 8035ec618551f7bec14920de5287414ea3ad810ec50015bdd2d789afd594f92d.
Mar 7 01:12:52.030543 containerd[1890]: time="2026-03-07T01:12:52.030480531Z" level=info msg="StartContainer for \"8035ec618551f7bec14920de5287414ea3ad810ec50015bdd2d789afd594f92d\" returns successfully"
Mar 7 01:12:52.646032 kubelet[3114]: E0307 01:12:52.644042 3114 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-156?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 7 01:12:52.755272 systemd[1]: run-containerd-runc-k8s.io-8035ec618551f7bec14920de5287414ea3ad810ec50015bdd2d789afd594f92d-runc.b9g6xC.mount: Deactivated successfully.
Mar 7 01:13:00.199306 systemd[1]: cri-containerd-ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085.scope: Deactivated successfully.
Mar 7 01:13:00.227629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085-rootfs.mount: Deactivated successfully.
Mar 7 01:13:00.238298 containerd[1890]: time="2026-03-07T01:13:00.238202182Z" level=info msg="shim disconnected" id=ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085 namespace=k8s.io
Mar 7 01:13:00.238298 containerd[1890]: time="2026-03-07T01:13:00.238274472Z" level=warning msg="cleaning up after shim disconnected" id=ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085 namespace=k8s.io
Mar 7 01:13:00.238298 containerd[1890]: time="2026-03-07T01:13:00.238287526Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:13:00.954683 kubelet[3114]: I0307 01:13:00.954635 3114 scope.go:117] "RemoveContainer" containerID="307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565"
Mar 7 01:13:00.956384 kubelet[3114]: I0307 01:13:00.954841 3114 scope.go:117] "RemoveContainer" containerID="ff450f754c4d40ed0f8c1f11e59c67d4763a636de1ad52114a0ad8fa9e338085"
Mar 7 01:13:00.970897 containerd[1890]: time="2026-03-07T01:13:00.970537936Z" level=info msg="RemoveContainer for \"307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565\""
Mar 7 01:13:00.971304 kubelet[3114]: E0307 01:13:00.968833 3114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5588576f44-fqcvh_tigera-operator(10936a21-0d6e-4d13-a0f9-80061dc4a39c)\"" pod="tigera-operator/tigera-operator-5588576f44-fqcvh" podUID="10936a21-0d6e-4d13-a0f9-80061dc4a39c"
Mar 7 01:13:00.976906 containerd[1890]: time="2026-03-07T01:13:00.976867339Z" level=info msg="RemoveContainer for \"307ee737de7083e97f9b4000537014464cc41f8fcbb481bd0f2af2d8a7301565\" returns successfully"
Mar 7 01:13:02.644436 kubelet[3114]: E0307 01:13:02.644350 3114 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-156?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 7 01:13:09.580752 systemd[1]: run-containerd-runc-k8s.io-70f01a5ab809fe17f47e1809d6725e5955ba292a68328d25a571019026a765fa-runc.QjcFB5.mount: Deactivated successfully.