Mar 6 03:02:11.872876 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:16:40 -00 2026
Mar 6 03:02:11.872909 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 03:02:11.872927 kernel: BIOS-provided physical RAM map:
Mar 6 03:02:11.872937 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 6 03:02:11.872948 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 6 03:02:11.872961 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 6 03:02:11.874688 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 6 03:02:11.874702 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 6 03:02:11.874715 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 6 03:02:11.874727 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 6 03:02:11.874739 kernel: NX (Execute Disable) protection: active
Mar 6 03:02:11.874755 kernel: APIC: Static calls initialized
Mar 6 03:02:11.874768 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Mar 6 03:02:11.874781 kernel: extended physical RAM map:
Mar 6 03:02:11.874796 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 6 03:02:11.874809 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Mar 6 03:02:11.874825 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Mar 6 03:02:11.874838 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Mar 6 03:02:11.874852 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 6 03:02:11.874865 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 6 03:02:11.874877 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 6 03:02:11.874890 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 6 03:02:11.874903 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 6 03:02:11.874916 kernel: efi: EFI v2.7 by EDK II
Mar 6 03:02:11.874929 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 6 03:02:11.874942 kernel: secureboot: Secure boot disabled
Mar 6 03:02:11.874955 kernel: SMBIOS 2.7 present.
Mar 6 03:02:11.874970 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 6 03:02:11.874983 kernel: DMI: Memory slots populated: 1/1
Mar 6 03:02:11.874996 kernel: Hypervisor detected: KVM
Mar 6 03:02:11.875009 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 6 03:02:11.875022 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 03:02:11.875035 kernel: kvm-clock: using sched offset of 5442545440 cycles
Mar 6 03:02:11.875048 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 03:02:11.875076 kernel: tsc: Detected 2499.996 MHz processor
Mar 6 03:02:11.875089 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 03:02:11.875103 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 03:02:11.875116 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 6 03:02:11.875132 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 6 03:02:11.875146 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 03:02:11.875165 kernel: Using GB pages for direct mapping
Mar 6 03:02:11.875179 kernel: ACPI: Early table checksum verification disabled
Mar 6 03:02:11.875192 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 6 03:02:11.875204 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 6 03:02:11.875220 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 6 03:02:11.875233 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 6 03:02:11.875246 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 6 03:02:11.875261 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 6 03:02:11.875273 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 6 03:02:11.875287 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 6 03:02:11.875301 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 6 03:02:11.875314 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 6 03:02:11.875330 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 6 03:02:11.875343 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 6 03:02:11.875356 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 6 03:02:11.875369 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 6 03:02:11.875382 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 6 03:02:11.875395 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 6 03:02:11.875407 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 6 03:02:11.875420 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 6 03:02:11.875433 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 6 03:02:11.875449 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 6 03:02:11.875463 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 6 03:02:11.875475 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 6 03:02:11.875488 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 6 03:02:11.875501 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 6 03:02:11.875514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 6 03:02:11.875527 kernel: NUMA: Initialized distance table, cnt=1
Mar 6 03:02:11.875540 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Mar 6 03:02:11.875553 kernel: Zone ranges:
Mar 6 03:02:11.875569 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 03:02:11.875582 kernel:   DMA32    [mem 0x0000000001000000-0x000000007c97bfff]
Mar 6 03:02:11.875595 kernel:   Normal   empty
Mar 6 03:02:11.875608 kernel:   Device   empty
Mar 6 03:02:11.875621 kernel: Movable zone start for each node
Mar 6 03:02:11.875633 kernel: Early memory node ranges
Mar 6 03:02:11.875646 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 6 03:02:11.875659 kernel:   node   0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 6 03:02:11.875671 kernel:   node   0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 6 03:02:11.875687 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 6 03:02:11.875701 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 03:02:11.875714 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 6 03:02:11.875727 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 6 03:02:11.875741 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 6 03:02:11.875754 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 6 03:02:11.875766 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 03:02:11.875780 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 6 03:02:11.875793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 03:02:11.875806 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 03:02:11.875822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 03:02:11.875835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 03:02:11.875847 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 03:02:11.875860 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 03:02:11.875873 kernel: TSC deadline timer available
Mar 6 03:02:11.875887 kernel: CPU topo: Max. logical packages: 1
Mar 6 03:02:11.875900 kernel: CPU topo: Max. logical dies: 1
Mar 6 03:02:11.875913 kernel: CPU topo: Max. dies per package: 1
Mar 6 03:02:11.875926 kernel: CPU topo: Max. threads per core: 2
Mar 6 03:02:11.875941 kernel: CPU topo: Num. cores per package: 1
Mar 6 03:02:11.875954 kernel: CPU topo: Num. threads per package: 2
Mar 6 03:02:11.875967 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 6 03:02:11.875980 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 03:02:11.875993 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 6 03:02:11.876006 kernel: Booting paravirtualized kernel on KVM
Mar 6 03:02:11.876019 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 03:02:11.876032 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 6 03:02:11.876045 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 6 03:02:11.878113 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 6 03:02:11.878145 kernel: pcpu-alloc: [0] 0 1
Mar 6 03:02:11.878162 kernel: kvm-guest: PV spinlocks enabled
Mar 6 03:02:11.878178 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 03:02:11.878197 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 03:02:11.878212 kernel: random: crng init done
Mar 6 03:02:11.878227 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 03:02:11.878242 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 6 03:02:11.878262 kernel: Fallback order for Node 0: 0
Mar 6 03:02:11.878278 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Mar 6 03:02:11.878292 kernel: Policy zone: DMA32
Mar 6 03:02:11.878317 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 03:02:11.878335 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 6 03:02:11.878350 kernel: Kernel/User page tables isolation: enabled
Mar 6 03:02:11.878366 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 6 03:02:11.878381 kernel: ftrace: allocated 157 pages with 5 groups
Mar 6 03:02:11.878396 kernel: Dynamic Preempt: voluntary
Mar 6 03:02:11.878411 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 03:02:11.878427 kernel: rcu: RCU event tracing is enabled.
Mar 6 03:02:11.878443 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 6 03:02:11.878462 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 03:02:11.878477 kernel: Rude variant of Tasks RCU enabled.
Mar 6 03:02:11.878493 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 03:02:11.878508 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 03:02:11.878525 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 6 03:02:11.878544 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 03:02:11.878561 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 03:02:11.878578 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 03:02:11.878595 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 6 03:02:11.878612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 03:02:11.878629 kernel: Console: colour dummy device 80x25
Mar 6 03:02:11.878646 kernel: printk: legacy console [tty0] enabled
Mar 6 03:02:11.878662 kernel: printk: legacy console [ttyS0] enabled
Mar 6 03:02:11.878680 kernel: ACPI: Core revision 20240827
Mar 6 03:02:11.878701 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 6 03:02:11.878718 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 03:02:11.878734 kernel: x2apic enabled
Mar 6 03:02:11.878751 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 03:02:11.878769 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 6 03:02:11.878785 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Mar 6 03:02:11.878802 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 6 03:02:11.878818 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 6 03:02:11.878835 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 03:02:11.878854 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 03:02:11.878869 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 03:02:11.878885 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 6 03:02:11.878901 kernel: RETBleed: Vulnerable
Mar 6 03:02:11.878915 kernel: Speculative Store Bypass: Vulnerable
Mar 6 03:02:11.878931 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 03:02:11.878946 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 03:02:11.878961 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 6 03:02:11.878976 kernel: active return thunk: its_return_thunk
Mar 6 03:02:11.878990 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 6 03:02:11.879004 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 03:02:11.879023 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 03:02:11.879038 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 03:02:11.879054 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 6 03:02:11.881106 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 6 03:02:11.881130 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 6 03:02:11.881145 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 6 03:02:11.881160 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 6 03:02:11.881177 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 6 03:02:11.881192 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 03:02:11.881208 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 6 03:02:11.881223 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 6 03:02:11.881245 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 6 03:02:11.881259 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 6 03:02:11.881273 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 6 03:02:11.881288 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 6 03:02:11.881303 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 6 03:02:11.881318 kernel: Freeing SMP alternatives memory: 32K
Mar 6 03:02:11.881333 kernel: pid_max: default: 32768 minimum: 301
Mar 6 03:02:11.881347 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 6 03:02:11.881364 kernel: landlock: Up and running.
Mar 6 03:02:11.881379 kernel: SELinux:  Initializing.
Mar 6 03:02:11.881394 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 6 03:02:11.881413 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 6 03:02:11.881429 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 6 03:02:11.881444 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 6 03:02:11.881459 kernel: signal: max sigframe size: 3632
Mar 6 03:02:11.881475 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 03:02:11.881492 kernel: rcu: 	Max phase no-delay instances is 400.
Mar 6 03:02:11.881508 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 6 03:02:11.881524 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 03:02:11.881540 kernel: smp: Bringing up secondary CPUs ...
Mar 6 03:02:11.881557 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 03:02:11.881575 kernel: .... node  #0, CPUs:       #1
Mar 6 03:02:11.881592 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 6 03:02:11.881608 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 6 03:02:11.881623 kernel: smp: Brought up 1 node, 2 CPUs
Mar 6 03:02:11.881637 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Mar 6 03:02:11.881653 kernel: Memory: 1899856K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 133384K reserved, 0K cma-reserved)
Mar 6 03:02:11.881667 kernel: devtmpfs: initialized
Mar 6 03:02:11.881682 kernel: x86/mm: Memory block size: 128MB
Mar 6 03:02:11.881703 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 6 03:02:11.881720 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 03:02:11.881735 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 6 03:02:11.881750 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 03:02:11.881766 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 03:02:11.881780 kernel: audit: initializing netlink subsys (disabled)
Mar 6 03:02:11.881794 kernel: audit: type=2000 audit(1772766129.309:1): state=initialized audit_enabled=0 res=1
Mar 6 03:02:11.881808 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 03:02:11.881823 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 03:02:11.881840 kernel: cpuidle: using governor menu
Mar 6 03:02:11.881855 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 03:02:11.881869 kernel: dca service started, version 1.12.1
Mar 6 03:02:11.881884 kernel: PCI: Using configuration type 1 for base access
Mar 6 03:02:11.881898 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 03:02:11.881913 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 03:02:11.881926 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 03:02:11.881942 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 03:02:11.881957 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 03:02:11.881977 kernel: ACPI: Added _OSI(Module Device)
Mar 6 03:02:11.881993 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 03:02:11.882008 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 03:02:11.882024 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 6 03:02:11.882039 kernel: ACPI: Interpreter enabled
Mar 6 03:02:11.882055 kernel: ACPI: PM: (supports S0 S5)
Mar 6 03:02:11.882101 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 03:02:11.882118 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 03:02:11.882133 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 03:02:11.882153 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 6 03:02:11.882169 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 03:02:11.882404 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 03:02:11.882550 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 6 03:02:11.882687 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 6 03:02:11.882709 kernel: acpiphp: Slot [3] registered
Mar 6 03:02:11.882725 kernel: acpiphp: Slot [4] registered
Mar 6 03:02:11.882746 kernel: acpiphp: Slot [5] registered
Mar 6 03:02:11.882761 kernel: acpiphp: Slot [6] registered
Mar 6 03:02:11.882777 kernel: acpiphp: Slot [7] registered
Mar 6 03:02:11.882792 kernel: acpiphp: Slot [8] registered
Mar 6 03:02:11.882808 kernel: acpiphp: Slot [9] registered
Mar 6 03:02:11.882824 kernel: acpiphp: Slot [10] registered
Mar 6 03:02:11.882840 kernel: acpiphp: Slot [11] registered
Mar 6 03:02:11.882856 kernel: acpiphp: Slot [12] registered
Mar 6 03:02:11.882872 kernel: acpiphp: Slot [13] registered
Mar 6 03:02:11.882888 kernel: acpiphp: Slot [14] registered
Mar 6 03:02:11.882906 kernel: acpiphp: Slot [15] registered
Mar 6 03:02:11.882922 kernel: acpiphp: Slot [16] registered
Mar 6 03:02:11.882938 kernel: acpiphp: Slot [17] registered
Mar 6 03:02:11.882954 kernel: acpiphp: Slot [18] registered
Mar 6 03:02:11.882968 kernel: acpiphp: Slot [19] registered
Mar 6 03:02:11.882984 kernel: acpiphp: Slot [20] registered
Mar 6 03:02:11.882999 kernel: acpiphp: Slot [21] registered
Mar 6 03:02:11.883015 kernel: acpiphp: Slot [22] registered
Mar 6 03:02:11.883032 kernel: acpiphp: Slot [23] registered
Mar 6 03:02:11.883053 kernel: acpiphp: Slot [24] registered
Mar 6 03:02:11.883083 kernel: acpiphp: Slot [25] registered
Mar 6 03:02:11.883095 kernel: acpiphp: Slot [26] registered
Mar 6 03:02:11.883110 kernel: acpiphp: Slot [27] registered
Mar 6 03:02:11.883122 kernel: acpiphp: Slot [28] registered
Mar 6 03:02:11.883137 kernel: acpiphp: Slot [29] registered
Mar 6 03:02:11.883152 kernel: acpiphp: Slot [30] registered
Mar 6 03:02:11.883167 kernel: acpiphp: Slot [31] registered
Mar 6 03:02:11.883182 kernel: PCI host bridge to bus 0000:00
Mar 6 03:02:11.883329 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 03:02:11.883454 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 03:02:11.883581 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 03:02:11.883701 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 6 03:02:11.883822 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 6 03:02:11.885274 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 03:02:11.885448 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Mar 6 03:02:11.885616 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Mar 6 03:02:11.885769 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Mar 6 03:02:11.885915 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 6 03:02:11.886058 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 6 03:02:11.887319 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 6 03:02:11.887475 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 6 03:02:11.887630 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 6 03:02:11.887780 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 6 03:02:11.887923 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 6 03:02:11.890123 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 6 03:02:11.890295 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Mar 6 03:02:11.890448 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 6 03:02:11.890580 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 03:02:11.890723 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Mar 6 03:02:11.890850 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Mar 6 03:02:11.890985 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Mar 6 03:02:11.891130 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Mar 6 03:02:11.891148 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 03:02:11.891163 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 03:02:11.891177 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 03:02:11.891195 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 03:02:11.891209 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 6 03:02:11.891223 kernel: iommu: Default domain type: Translated
Mar 6 03:02:11.891237 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 03:02:11.891251 kernel: efivars: Registered efivars operations
Mar 6 03:02:11.891265 kernel: PCI: Using ACPI for IRQ routing
Mar 6 03:02:11.891279 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 03:02:11.891293 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Mar 6 03:02:11.891306 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 6 03:02:11.891323 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 6 03:02:11.891446 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 6 03:02:11.891569 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 6 03:02:11.891694 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 03:02:11.891712 kernel: vgaarb: loaded
Mar 6 03:02:11.891726 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 6 03:02:11.891740 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 6 03:02:11.891754 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 03:02:11.891767 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 03:02:11.891784 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 03:02:11.891798 kernel: pnp: PnP ACPI init
Mar 6 03:02:11.891812 kernel: pnp: PnP ACPI: found 5 devices
Mar 6 03:02:11.891826 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 03:02:11.891840 kernel: NET: Registered PF_INET protocol family
Mar 6 03:02:11.891855 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 03:02:11.891870 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 6 03:02:11.891884 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 03:02:11.891901 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 6 03:02:11.891915 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 6 03:02:11.891930 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 6 03:02:11.891944 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 6 03:02:11.891958 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 6 03:02:11.891972 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 03:02:11.891986 kernel: NET: Registered PF_XDP protocol family
Mar 6 03:02:11.892786 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 03:02:11.892924 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 03:02:11.893049 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 03:02:11.894230 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 6 03:02:11.894359 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 6 03:02:11.894498 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 6 03:02:11.894520 kernel: PCI: CLS 0 bytes, default 64
Mar 6 03:02:11.894536 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 6 03:02:11.894553 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 6 03:02:11.894569 kernel: clocksource: Switched to clocksource tsc
Mar 6 03:02:11.894590 kernel: Initialise system trusted keyrings
Mar 6 03:02:11.894606 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 6 03:02:11.894622 kernel: Key type asymmetric registered
Mar 6 03:02:11.894637 kernel: Asymmetric key parser 'x509' registered
Mar 6 03:02:11.894653 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 6 03:02:11.894670 kernel: io scheduler mq-deadline registered
Mar 6 03:02:11.894686 kernel: io scheduler kyber registered
Mar 6 03:02:11.894702 kernel: io scheduler bfq registered
Mar 6 03:02:11.894718 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 03:02:11.894737 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 03:02:11.894753 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 03:02:11.894769 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 03:02:11.894785 kernel: i8042: Warning: Keylock active
Mar 6 03:02:11.894801 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 03:02:11.894817 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 03:02:11.894958 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 6 03:02:11.897132 kernel: rtc_cmos 00:00: registered as rtc0
Mar 6 03:02:11.897307 kernel: rtc_cmos 00:00: setting system clock to 2026-03-06T03:02:11 UTC (1772766131)
Mar 6 03:02:11.897447 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 6 03:02:11.897490 kernel: intel_pstate: CPU model not supported
Mar 6 03:02:11.897509 kernel: efifb: probing for efifb
Mar 6 03:02:11.897526 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Mar 6 03:02:11.897541 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 6 03:02:11.897556 kernel: efifb: scrolling: redraw
Mar 6 03:02:11.897571 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 6 03:02:11.897586 kernel: Console: switching to colour frame buffer device 100x37
Mar 6 03:02:11.897604 kernel: fb0: EFI VGA frame buffer device
Mar 6 03:02:11.897619 kernel: pstore: Using crash dump compression: deflate
Mar 6 03:02:11.897634 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 6 03:02:11.897651 kernel: NET: Registered PF_INET6 protocol family
Mar 6 03:02:11.897668 kernel: Segment Routing with IPv6
Mar 6 03:02:11.897685 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 03:02:11.897702 kernel: NET: Registered PF_PACKET protocol family
Mar 6 03:02:11.897720 kernel: Key type dns_resolver registered
Mar 6 03:02:11.897737 kernel: IPI shorthand broadcast: enabled
Mar 6 03:02:11.897758 kernel: sched_clock: Marking stable (2569003233, 146216763)->(2784979987, -69759991)
Mar 6 03:02:11.897775 kernel: registered taskstats version 1
Mar 6 03:02:11.897790 kernel: Loading compiled-in X.509 certificates
Mar 6 03:02:11.897806 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 30893fe9fd219d26109af079e6493e1c8b1c00af'
Mar 6 03:02:11.897821 kernel: Demotion targets for Node 0: null
Mar 6 03:02:11.897835 kernel: Key type .fscrypt registered
Mar 6 03:02:11.897851 kernel: Key type fscrypt-provisioning registered
Mar 6 03:02:11.897865 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 03:02:11.897881 kernel: ima: Allocated hash algorithm: sha1
Mar 6 03:02:11.897899 kernel: ima: No architecture policies found
Mar 6 03:02:11.897915 kernel: clk: Disabling unused clocks
Mar 6 03:02:11.897930 kernel: Warning: unable to open an initial console.
Mar 6 03:02:11.897946 kernel: Freeing unused kernel image (initmem) memory: 46196K
Mar 6 03:02:11.897962 kernel: Write protecting the kernel read-only data: 40960k
Mar 6 03:02:11.897980 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 6 03:02:11.897998 kernel: Run /init as init process
Mar 6 03:02:11.898014 kernel:   with arguments:
Mar 6 03:02:11.898030 kernel:     /init
Mar 6 03:02:11.898044 kernel:   with environment:
Mar 6 03:02:11.898060 kernel:     HOME=/
Mar 6 03:02:11.898103 kernel:     TERM=linux
Mar 6 03:02:11.898121 systemd[1]: Successfully made /usr/ read-only.
Mar 6 03:02:11.898142 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 03:02:11.898161 systemd[1]: Detected virtualization amazon.
Mar 6 03:02:11.898177 systemd[1]: Detected architecture x86-64.
Mar 6 03:02:11.898193 systemd[1]: Running in initrd.
Mar 6 03:02:11.898209 systemd[1]: No hostname configured, using default hostname.
Mar 6 03:02:11.898225 systemd[1]: Hostname set to .
Mar 6 03:02:11.898242 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 03:02:11.898258 systemd[1]: Queued start job for default target initrd.target.
Mar 6 03:02:11.898277 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 03:02:11.898294 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 03:02:11.898312 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 03:02:11.898328 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 03:02:11.898345 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 03:02:11.898363 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 03:02:11.898381 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 03:02:11.898401 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 03:02:11.898418 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 03:02:11.898435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 03:02:11.898451 systemd[1]: Reached target paths.target - Path Units.
Mar 6 03:02:11.898468 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 03:02:11.898484 systemd[1]: Reached target swap.target - Swaps.
Mar 6 03:02:11.898500 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 03:02:11.898517 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 03:02:11.898535 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 03:02:11.898554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 03:02:11.898570 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 6 03:02:11.898587 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 03:02:11.898604 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 03:02:11.898620 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 03:02:11.898638 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 03:02:11.898655 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 03:02:11.898672 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 03:02:11.898692 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 03:02:11.898709 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 6 03:02:11.898726 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 03:02:11.898743 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 03:02:11.898760 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 03:02:11.898777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 03:02:11.898794 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 03:02:11.898848 systemd-journald[188]: Collecting audit messages is disabled.
Mar 6 03:02:11.898889 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 03:02:11.898908 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 03:02:11.898927 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 03:02:11.898946 systemd-journald[188]: Journal started
Mar 6 03:02:11.898984 systemd-journald[188]: Runtime Journal (/run/log/journal/ec2c4252d988b1f76782644d000bb131) is 4.7M, max 38.1M, 33.3M free.
Mar 6 03:02:11.905128 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 03:02:11.911313 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 03:02:11.912437 systemd-modules-load[189]: Inserted module 'overlay'
Mar 6 03:02:11.916497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:02:11.924216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 03:02:11.928300 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 03:02:11.937224 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 03:02:11.948028 systemd-tmpfiles[202]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 6 03:02:11.954820 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 03:02:11.971097 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 03:02:11.970745 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 03:02:11.975397 systemd-modules-load[189]: Inserted module 'br_netfilter'
Mar 6 03:02:11.976102 kernel: Bridge firewalling registered
Mar 6 03:02:11.977661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 03:02:11.980234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 03:02:11.982085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 03:02:11.985094 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 03:02:12.000629 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 03:02:12.004560 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 03:02:12.005561 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 6 03:02:12.011238 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 03:02:12.065794 systemd-resolved[235]: Positive Trust Anchors:
Mar 6 03:02:12.066834 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 03:02:12.066905 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 03:02:12.075868 systemd-resolved[235]: Defaulting to hostname 'linux'.
Mar 6 03:02:12.077332 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 03:02:12.078861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 03:02:12.113099 kernel: SCSI subsystem initialized
Mar 6 03:02:12.123124 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 03:02:12.134090 kernel: iscsi: registered transport (tcp)
Mar 6 03:02:12.155511 kernel: iscsi: registered transport (qla4xxx)
Mar 6 03:02:12.155591 kernel: QLogic iSCSI HBA Driver
Mar 6 03:02:12.174536 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 03:02:12.198891 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 03:02:12.202351 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 03:02:12.246521 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 03:02:12.248493 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 03:02:12.303113 kernel: raid6: avx512x4 gen() 18153 MB/s
Mar 6 03:02:12.321095 kernel: raid6: avx512x2 gen() 17975 MB/s
Mar 6 03:02:12.339096 kernel: raid6: avx512x1 gen() 17982 MB/s
Mar 6 03:02:12.357092 kernel: raid6: avx2x4 gen() 17950 MB/s
Mar 6 03:02:12.375097 kernel: raid6: avx2x2 gen() 17978 MB/s
Mar 6 03:02:12.393583 kernel: raid6: avx2x1 gen() 13612 MB/s
Mar 6 03:02:12.393650 kernel: raid6: using algorithm avx512x4 gen() 18153 MB/s
Mar 6 03:02:12.412340 kernel: raid6: .... xor() 7738 MB/s, rmw enabled
Mar 6 03:02:12.412400 kernel: raid6: using avx512x2 recovery algorithm
Mar 6 03:02:12.433096 kernel: xor: automatically using best checksumming function avx
Mar 6 03:02:12.601095 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 03:02:12.607945 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 03:02:12.610274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 03:02:12.635010 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Mar 6 03:02:12.641827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 03:02:12.646055 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 03:02:12.670715 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation
Mar 6 03:02:12.697587 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 03:02:12.699569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 03:02:12.761949 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 03:02:12.766140 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 03:02:12.856317 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 6 03:02:12.856601 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 6 03:02:12.866088 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 6 03:02:12.879172 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:ae:57:e2:88:b7
Mar 6 03:02:12.883110 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Mar 6 03:02:12.887084 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 03:02:12.887151 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 6 03:02:12.891092 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 6 03:02:12.908085 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 6 03:02:12.926233 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 03:02:12.926275 kernel: GPT:9289727 != 33554431
Mar 6 03:02:12.926297 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 03:02:12.926317 kernel: GPT:9289727 != 33554431
Mar 6 03:02:12.926336 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 03:02:12.926357 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 03:02:12.912555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 03:02:12.912658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:02:12.913345 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 03:02:12.921468 (udev-worker)[481]: Network interface NamePolicy= disabled on kernel command line.
Mar 6 03:02:12.930793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 03:02:12.933257 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 03:02:12.938647 kernel: AES CTR mode by8 optimization enabled
Mar 6 03:02:12.977707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:02:12.989105 kernel: nvme nvme0: using unchecked data buffer
Mar 6 03:02:13.126803 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 6 03:02:13.146214 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 6 03:02:13.156786 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 03:02:13.176358 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 6 03:02:13.185731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 6 03:02:13.186281 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 6 03:02:13.187418 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 03:02:13.188489 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 03:02:13.189682 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 03:02:13.191358 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 03:02:13.194979 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 03:02:13.212729 disk-uuid[670]: Primary Header is updated.
Mar 6 03:02:13.212729 disk-uuid[670]: Secondary Entries is updated.
Mar 6 03:02:13.212729 disk-uuid[670]: Secondary Header is updated.
Mar 6 03:02:13.220140 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 03:02:13.221398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 03:02:13.236116 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 03:02:14.234096 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 03:02:14.236965 disk-uuid[672]: The operation has completed successfully.
Mar 6 03:02:14.370822 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 03:02:14.370949 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 03:02:14.408850 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 03:02:14.426840 sh[936]: Success
Mar 6 03:02:14.453450 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 03:02:14.453548 kernel: device-mapper: uevent: version 1.0.3
Mar 6 03:02:14.454173 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 6 03:02:14.467099 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Mar 6 03:02:14.558182 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 03:02:14.563175 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 03:02:14.578238 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 03:02:14.595131 kernel: BTRFS: device fsid 1235dd15-5252-4928-9c6c-372370c6bfca devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (959)
Mar 6 03:02:14.599756 kernel: BTRFS info (device dm-0): first mount of filesystem 1235dd15-5252-4928-9c6c-372370c6bfca
Mar 6 03:02:14.599838 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 03:02:14.698641 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 6 03:02:14.698735 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 6 03:02:14.698756 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 6 03:02:14.714737 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 03:02:14.715924 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 03:02:14.716506 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 03:02:14.718284 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 03:02:14.721198 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 03:02:14.758132 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:11) scanned by mount (992)
Mar 6 03:02:14.763094 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:02:14.763163 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 03:02:14.781000 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 6 03:02:14.781090 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 6 03:02:14.789087 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:02:14.790371 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 03:02:14.793352 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 03:02:14.828788 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 03:02:14.831346 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 03:02:14.869395 systemd-networkd[1128]: lo: Link UP
Mar 6 03:02:14.869406 systemd-networkd[1128]: lo: Gained carrier
Mar 6 03:02:14.871049 systemd-networkd[1128]: Enumeration completed
Mar 6 03:02:14.871786 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 03:02:14.871795 systemd-networkd[1128]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 03:02:14.872799 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 03:02:14.874488 systemd[1]: Reached target network.target - Network.
Mar 6 03:02:14.875436 systemd-networkd[1128]: eth0: Link UP
Mar 6 03:02:14.875444 systemd-networkd[1128]: eth0: Gained carrier
Mar 6 03:02:14.875460 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 03:02:14.885156 systemd-networkd[1128]: eth0: DHCPv4 address 172.31.18.81/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 6 03:02:15.261090 ignition[1083]: Ignition 2.22.0
Mar 6 03:02:15.261102 ignition[1083]: Stage: fetch-offline
Mar 6 03:02:15.261266 ignition[1083]: no configs at "/usr/lib/ignition/base.d"
Mar 6 03:02:15.261273 ignition[1083]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:02:15.262665 ignition[1083]: Ignition finished successfully
Mar 6 03:02:15.265426 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 03:02:15.267477 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 6 03:02:15.296326 ignition[1138]: Ignition 2.22.0
Mar 6 03:02:15.296341 ignition[1138]: Stage: fetch
Mar 6 03:02:15.296824 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Mar 6 03:02:15.296838 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:02:15.296955 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:02:15.356053 ignition[1138]: PUT result: OK
Mar 6 03:02:15.366799 ignition[1138]: parsed url from cmdline: ""
Mar 6 03:02:15.366811 ignition[1138]: no config URL provided
Mar 6 03:02:15.366821 ignition[1138]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 03:02:15.366837 ignition[1138]: no config at "/usr/lib/ignition/user.ign"
Mar 6 03:02:15.366869 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:02:15.368979 ignition[1138]: PUT result: OK
Mar 6 03:02:15.369076 ignition[1138]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 6 03:02:15.372636 ignition[1138]: GET result: OK
Mar 6 03:02:15.372732 ignition[1138]: parsing config with SHA512: 54eefdd795647eff04cccbe07ae04f39c00fb703bf376801938a9fc0306db4d6e2d99dd4aeb6d7d0660bdc7460f11d0a5e0ba0165c462a8d4ea87147f298a8db
Mar 6 03:02:15.377616 unknown[1138]: fetched base config from "system"
Mar 6 03:02:15.377643 unknown[1138]: fetched base config from "system"
Mar 6 03:02:15.377653 unknown[1138]: fetched user config from "aws"
Mar 6 03:02:15.379702 ignition[1138]: fetch: fetch complete
Mar 6 03:02:15.379714 ignition[1138]: fetch: fetch passed
Mar 6 03:02:15.379791 ignition[1138]: Ignition finished successfully
Mar 6 03:02:15.383433 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 6 03:02:15.384992 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 03:02:15.413871 ignition[1144]: Ignition 2.22.0
Mar 6 03:02:15.413888 ignition[1144]: Stage: kargs
Mar 6 03:02:15.414283 ignition[1144]: no configs at "/usr/lib/ignition/base.d"
Mar 6 03:02:15.414295 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:02:15.414408 ignition[1144]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:02:15.415311 ignition[1144]: PUT result: OK
Mar 6 03:02:15.417591 ignition[1144]: kargs: kargs passed
Mar 6 03:02:15.417666 ignition[1144]: Ignition finished successfully
Mar 6 03:02:15.419347 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 03:02:15.421228 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 03:02:15.453366 ignition[1150]: Ignition 2.22.0
Mar 6 03:02:15.453380 ignition[1150]: Stage: disks
Mar 6 03:02:15.453774 ignition[1150]: no configs at "/usr/lib/ignition/base.d"
Mar 6 03:02:15.453787 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:02:15.453898 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:02:15.454701 ignition[1150]: PUT result: OK
Mar 6 03:02:15.459504 ignition[1150]: disks: disks passed
Mar 6 03:02:15.459570 ignition[1150]: Ignition finished successfully
Mar 6 03:02:15.461894 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 03:02:15.462501 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 03:02:15.462862 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 03:02:15.463663 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 03:02:15.463973 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 03:02:15.464513 systemd[1]: Reached target basic.target - Basic System.
Mar 6 03:02:15.466171 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 03:02:15.514415 systemd-fsck[1158]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 6 03:02:15.517799 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 03:02:15.519915 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 03:02:15.657092 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 16ab7223-a8af-43d2-ad40-7e1bf0ff2a89 r/w with ordered data mode. Quota mode: none.
Mar 6 03:02:15.657391 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 03:02:15.658372 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 03:02:15.660192 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 03:02:15.662426 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 03:02:15.665645 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 03:02:15.665710 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 03:02:15.665744 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 03:02:15.681676 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 03:02:15.683696 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 03:02:15.690094 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:11) scanned by mount (1177)
Mar 6 03:02:15.694259 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:02:15.694318 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 03:02:15.703160 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 6 03:02:15.703230 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 6 03:02:15.705765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 03:02:16.083353 initrd-setup-root[1201]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 03:02:16.101459 initrd-setup-root[1208]: cut: /sysroot/etc/group: No such file or directory
Mar 6 03:02:16.106781 initrd-setup-root[1215]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 03:02:16.111552 initrd-setup-root[1222]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 03:02:16.466827 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 03:02:16.469094 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 03:02:16.472230 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 03:02:16.486125 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 03:02:16.488991 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:02:16.498170 systemd-networkd[1128]: eth0: Gained IPv6LL
Mar 6 03:02:16.521810 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 03:02:16.526360 ignition[1290]: INFO : Ignition 2.22.0
Mar 6 03:02:16.526360 ignition[1290]: INFO : Stage: mount
Mar 6 03:02:16.527987 ignition[1290]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 03:02:16.527987 ignition[1290]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:02:16.527987 ignition[1290]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:02:16.531132 ignition[1290]: INFO : PUT result: OK
Mar 6 03:02:16.534264 ignition[1290]: INFO : mount: mount passed
Mar 6 03:02:16.534801 ignition[1290]: INFO : Ignition finished successfully
Mar 6 03:02:16.536993 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 03:02:16.538584 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 03:02:16.659254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 03:02:16.686100 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:11) scanned by mount (1301)
Mar 6 03:02:16.689235 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 03:02:16.689301 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 03:02:16.699924 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 6 03:02:16.700010 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 6 03:02:16.702214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 03:02:16.737693 ignition[1317]: INFO : Ignition 2.22.0
Mar 6 03:02:16.737693 ignition[1317]: INFO : Stage: files
Mar 6 03:02:16.739298 ignition[1317]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 03:02:16.739298 ignition[1317]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:02:16.739298 ignition[1317]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:02:16.740493 ignition[1317]: INFO : PUT result: OK
Mar 6 03:02:16.742823 ignition[1317]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 03:02:16.743637 ignition[1317]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 03:02:16.743637 ignition[1317]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 03:02:16.759984 ignition[1317]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 03:02:16.761181 ignition[1317]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 03:02:16.761970 ignition[1317]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 03:02:16.761617 unknown[1317]: wrote ssh authorized keys file for user: core
Mar 6 03:02:16.764757 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 03:02:16.765557 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 03:02:16.844322 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 03:02:17.025902 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 03:02:17.025902 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 03:02:17.028007 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 03:02:17.028007 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 03:02:17.028007 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 03:02:17.028007 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 03:02:17.028007 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 03:02:17.028007 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 03:02:17.028007 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 03:02:17.033152 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 03:02:17.033152 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 03:02:17.033152 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 03:02:17.035952 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 03:02:17.035952 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 03:02:17.035952 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 6 03:02:17.553224 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 6 03:02:18.827407 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 03:02:18.827407 ignition[1317]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 6 03:02:18.830043 ignition[1317]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 03:02:18.833345 ignition[1317]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 03:02:18.833345 ignition[1317]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 6 03:02:18.833345 ignition[1317]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 03:02:18.836803 ignition[1317]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 03:02:18.836803 ignition[1317]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 03:02:18.836803 ignition[1317]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 03:02:18.836803 ignition[1317]: INFO : files: files passed
Mar 6 03:02:18.836803 ignition[1317]: INFO : Ignition finished successfully
Mar 6 03:02:18.835850 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 03:02:18.839214 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 03:02:18.844459 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 03:02:18.855844 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 03:02:18.856855 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 03:02:18.862084 initrd-setup-root-after-ignition[1348]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 03:02:18.862084 initrd-setup-root-after-ignition[1348]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 03:02:18.865419 initrd-setup-root-after-ignition[1352]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 03:02:18.867650 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 03:02:18.869337 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 6 03:02:18.870759 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 03:02:18.923588 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 03:02:18.923735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 03:02:18.925772 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 03:02:18.926331 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 03:02:18.927117 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 03:02:18.928244 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 03:02:18.954206 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 03:02:18.956348 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 03:02:18.987273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 03:02:18.988423 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 03:02:18.989379 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 03:02:18.990000 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 03:02:18.990286 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 03:02:18.991372 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 03:02:18.992226 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 03:02:18.993122 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 03:02:18.993896 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 03:02:18.994666 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 03:02:18.995445 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 03:02:18.996228 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 03:02:18.997084 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 03:02:18.997887 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 03:02:18.998998 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 03:02:18.999846 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 03:02:19.000702 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 03:02:19.000931 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 03:02:19.001921 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 03:02:19.002758 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 03:02:19.003402 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 03:02:19.003543 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 03:02:19.004246 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 03:02:19.004457 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 03:02:19.005892 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 03:02:19.006096 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 03:02:19.006855 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 03:02:19.007053 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 03:02:19.008902 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 03:02:19.013004 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 03:02:19.015141 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 03:02:19.015340 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 03:02:19.016333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 03:02:19.016500 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 03:02:19.026416 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 03:02:19.029221 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 03:02:19.050851 ignition[1372]: INFO : Ignition 2.22.0
Mar 6 03:02:19.050851 ignition[1372]: INFO : Stage: umount
Mar 6 03:02:19.050851 ignition[1372]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 03:02:19.050851 ignition[1372]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 03:02:19.050851 ignition[1372]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 03:02:19.057188 ignition[1372]: INFO : PUT result: OK
Mar 6 03:02:19.057188 ignition[1372]: INFO : umount: umount passed
Mar 6 03:02:19.057188 ignition[1372]: INFO : Ignition finished successfully
Mar 6 03:02:19.052355 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 03:02:19.058293 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 03:02:19.058464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 03:02:19.060960 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 03:02:19.061109 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 03:02:19.062732 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 03:02:19.062817 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 03:02:19.063541 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 03:02:19.063600 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 03:02:19.064160 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 6 03:02:19.064221 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 6 03:02:19.064865 systemd[1]: Stopped target network.target - Network.
Mar 6 03:02:19.065478 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 03:02:19.065544 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 03:02:19.066129 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 03:02:19.066708 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 03:02:19.071166 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 03:02:19.072324 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 03:02:19.072814 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 03:02:19.073476 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 03:02:19.073534 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 03:02:19.074156 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 03:02:19.074208 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 03:02:19.074784 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 03:02:19.074861 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 03:02:19.075476 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 03:02:19.075536 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 03:02:19.076112 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 03:02:19.076174 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 03:02:19.077052 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 03:02:19.077697 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 03:02:19.084229 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 03:02:19.084407 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 03:02:19.088808 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 6 03:02:19.089211 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 03:02:19.089350 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 03:02:19.091961 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 6 03:02:19.093169 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 6 03:02:19.093700 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 03:02:19.093773 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 03:02:19.095599 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 03:02:19.096134 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 03:02:19.096208 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 03:02:19.096977 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 03:02:19.097036 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 03:02:19.100230 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 03:02:19.100300 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 03:02:19.100951 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 03:02:19.101020 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 03:02:19.101833 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 03:02:19.106530 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 6 03:02:19.106628 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 6 03:02:19.111406 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 03:02:19.111599 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 03:02:19.114803 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 03:02:19.114902 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 03:02:19.117872 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 03:02:19.117930 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 03:02:19.119821 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 03:02:19.120268 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 03:02:19.121676 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 03:02:19.121750 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 03:02:19.122984 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 03:02:19.123099 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 03:02:19.127853 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 03:02:19.128441 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 6 03:02:19.128653 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 03:02:19.129911 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 03:02:19.129983 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 03:02:19.132176 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 03:02:19.132243 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 03:02:19.135207 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 03:02:19.135273 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 03:02:19.135825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 03:02:19.135869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:02:19.139052 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 6 03:02:19.139160 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 6 03:02:19.139212 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 6 03:02:19.139269 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 03:02:19.139770 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 03:02:19.142187 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 03:02:19.149587 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 03:02:19.149731 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 03:02:19.150817 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 03:02:19.152433 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 03:02:19.174150 systemd[1]: Switching root.
Mar 6 03:02:19.229086 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Mar 6 03:02:19.229171 systemd-journald[188]: Journal stopped
Mar 6 03:02:20.953331 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 03:02:20.953429 kernel: SELinux: policy capability open_perms=1
Mar 6 03:02:20.953450 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 03:02:20.953466 kernel: SELinux: policy capability always_check_network=0
Mar 6 03:02:20.953484 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 03:02:20.953502 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 03:02:20.953526 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 03:02:20.953543 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 03:02:20.953566 kernel: SELinux: policy capability userspace_initial_context=0
Mar 6 03:02:20.953588 kernel: audit: type=1403 audit(1772766139.623:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 03:02:20.953609 systemd[1]: Successfully loaded SELinux policy in 92.305ms.
Mar 6 03:02:20.953643 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.294ms.
Mar 6 03:02:20.953663 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 03:02:20.953682 systemd[1]: Detected virtualization amazon.
Mar 6 03:02:20.953702 systemd[1]: Detected architecture x86-64.
Mar 6 03:02:20.953721 systemd[1]: Detected first boot.
Mar 6 03:02:20.953742 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 03:02:20.953760 zram_generator::config[1416]: No configuration found.
Mar 6 03:02:20.953783 kernel: Guest personality initialized and is inactive
Mar 6 03:02:20.953800 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 6 03:02:20.953827 kernel: Initialized host personality
Mar 6 03:02:20.953844 kernel: NET: Registered PF_VSOCK protocol family
Mar 6 03:02:20.953863 systemd[1]: Populated /etc with preset unit settings.
Mar 6 03:02:20.953882 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 6 03:02:20.953902 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 03:02:20.953925 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 03:02:20.953944 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 03:02:20.953966 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 03:02:20.953986 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 03:02:20.954006 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 03:02:20.954025 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 03:02:20.954044 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 03:02:20.954077 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 03:02:20.954096 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 03:02:20.954115 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 03:02:20.954137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 03:02:20.954157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 03:02:20.954175 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 03:02:20.954194 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 03:02:20.954212 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 03:02:20.954231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 03:02:20.954251 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 03:02:20.954272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 03:02:20.954291 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 03:02:20.954309 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 03:02:20.954328 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 03:02:20.954346 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 03:02:20.954366 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 03:02:20.954385 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 03:02:20.954403 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 03:02:20.954421 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 03:02:20.954438 systemd[1]: Reached target swap.target - Swaps.
Mar 6 03:02:20.954459 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 03:02:20.954476 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 03:02:20.954496 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 6 03:02:20.954523 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 03:02:20.954545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 03:02:20.954565 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 03:02:20.954586 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 03:02:20.954607 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 03:02:20.954630 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 03:02:20.954654 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 03:02:20.954674 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 03:02:20.954696 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 03:02:20.954717 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 03:02:20.954737 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 03:02:20.954759 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 03:02:20.954781 systemd[1]: Reached target machines.target - Containers.
Mar 6 03:02:20.954803 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 03:02:20.954827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 03:02:20.954849 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 03:02:20.954870 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 03:02:20.954890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 03:02:20.954909 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 03:02:20.954926 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 03:02:20.954946 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 03:02:20.954964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 03:02:20.954984 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 03:02:20.955006 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 03:02:20.955024 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 03:02:20.955042 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 03:02:20.957088 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 03:02:20.957131 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 03:02:20.957152 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 03:02:20.957171 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 03:02:20.957190 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 03:02:20.957214 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 03:02:20.957234 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 6 03:02:20.957255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 03:02:20.957282 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 03:02:20.957302 systemd[1]: Stopped verity-setup.service.
Mar 6 03:02:20.957321 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 03:02:20.957343 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 03:02:20.957363 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 03:02:20.957383 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 03:02:20.957442 systemd-journald[1492]: Collecting audit messages is disabled.
Mar 6 03:02:20.957487 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 03:02:20.957507 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 03:02:20.957528 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 03:02:20.957550 systemd-journald[1492]: Journal started
Mar 6 03:02:20.957587 systemd-journald[1492]: Runtime Journal (/run/log/journal/ec2c4252d988b1f76782644d000bb131) is 4.7M, max 38.1M, 33.3M free.
Mar 6 03:02:20.635513 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 03:02:20.661429 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 6 03:02:20.661872 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 03:02:20.989504 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 03:02:20.984274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 03:02:20.985637 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 03:02:20.985948 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 03:02:20.988487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 03:02:20.988752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 03:02:20.990619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 03:02:20.991093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 03:02:21.001586 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 03:02:21.016169 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 03:02:21.018438 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 03:02:21.018486 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 03:02:21.023089 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 6 03:02:21.030088 kernel: loop: module loaded
Mar 6 03:02:21.037259 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 03:02:21.038205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 03:02:21.041251 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 03:02:21.049817 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 03:02:21.050646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 03:02:21.056909 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 03:02:21.060307 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 03:02:21.067337 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 03:02:21.071675 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 03:02:21.073290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 03:02:21.074400 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 03:02:21.075390 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 03:02:21.076292 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 03:02:21.084908 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 03:02:21.086012 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 03:02:21.090485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 03:02:21.092085 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 6 03:02:21.113089 kernel: fuse: init (API version 7.41)
Mar 6 03:02:21.119188 systemd-journald[1492]: Time spent on flushing to /var/log/journal/ec2c4252d988b1f76782644d000bb131 is 42.604ms for 1011 entries.
Mar 6 03:02:21.119188 systemd-journald[1492]: System Journal (/var/log/journal/ec2c4252d988b1f76782644d000bb131) is 8M, max 195.6M, 187.6M free.
Mar 6 03:02:21.185554 systemd-journald[1492]: Received client request to flush runtime journal.
Mar 6 03:02:21.185636 kernel: loop0: detected capacity change from 0 to 72368
Mar 6 03:02:21.137617 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 03:02:21.139571 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 03:02:21.139836 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 03:02:21.142045 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 03:02:21.148224 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 03:02:21.155261 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 6 03:02:21.158203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 03:02:21.175199 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 03:02:21.201374 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 03:02:21.251210 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 03:02:21.267769 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 6 03:02:21.278611 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Mar 6 03:02:21.278639 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Mar 6 03:02:21.290578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 03:02:21.294449 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 03:02:21.337577 kernel: ACPI: bus type drm_connector registered
Mar 6 03:02:21.336605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 03:02:21.340030 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 03:02:21.340670 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 03:02:21.353098 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 03:02:21.373096 kernel: loop1: detected capacity change from 0 to 217752
Mar 6 03:02:21.397030 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 03:02:21.401882 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 03:02:21.424051 systemd-tmpfiles[1572]: ACLs are not supported, ignoring.
Mar 6 03:02:21.424463 systemd-tmpfiles[1572]: ACLs are not supported, ignoring.
Mar 6 03:02:21.429765 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 03:02:21.489142 kernel: loop2: detected capacity change from 0 to 128560
Mar 6 03:02:21.625801 kernel: loop3: detected capacity change from 0 to 110984
Mar 6 03:02:21.668386 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 03:02:21.766131 kernel: loop4: detected capacity change from 0 to 72368
Mar 6 03:02:21.784098 kernel: loop5: detected capacity change from 0 to 217752
Mar 6 03:02:21.819276 kernel: loop6: detected capacity change from 0 to 128560
Mar 6 03:02:21.845099 kernel: loop7: detected capacity change from 0 to 110984
Mar 6 03:02:21.853882 (sd-merge)[1579]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 6 03:02:21.856672 (sd-merge)[1579]: Merged extensions into '/usr'.
Mar 6 03:02:21.866806 systemd[1]: Reload requested from client PID 1534 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 03:02:21.866965 systemd[1]: Reloading...
Mar 6 03:02:21.984090 zram_generator::config[1603]: No configuration found.
Mar 6 03:02:22.250389 systemd[1]: Reloading finished in 382 ms.
Mar 6 03:02:22.275924 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 03:02:22.277001 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 03:02:22.293394 systemd[1]: Starting ensure-sysext.service...
Mar 6 03:02:22.298232 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 03:02:22.303382 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 03:02:22.327193 systemd[1]: Reload requested from client PID 1657 ('systemctl') (unit ensure-sysext.service)...
Mar 6 03:02:22.327210 systemd[1]: Reloading...
Mar 6 03:02:22.332757 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 6 03:02:22.334195 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 6 03:02:22.335960 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 03:02:22.338682 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 6 03:02:22.342305 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 6 03:02:22.342863 systemd-tmpfiles[1658]: ACLs are not supported, ignoring. Mar 6 03:02:22.344380 systemd-tmpfiles[1658]: ACLs are not supported, ignoring. Mar 6 03:02:22.356405 systemd-tmpfiles[1658]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 03:02:22.358221 systemd-tmpfiles[1658]: Skipping /boot Mar 6 03:02:22.363029 systemd-udevd[1659]: Using default interface naming scheme 'v255'. Mar 6 03:02:22.376239 systemd-tmpfiles[1658]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 03:02:22.376257 systemd-tmpfiles[1658]: Skipping /boot Mar 6 03:02:22.452097 zram_generator::config[1686]: No configuration found. Mar 6 03:02:22.753927 (udev-worker)[1721]: Network interface NamePolicy= disabled on kernel command line. Mar 6 03:02:22.870104 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 6 03:02:22.875767 kernel: ACPI: button: Power Button [PWRF] Mar 6 03:02:22.875864 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Mar 6 03:02:22.876950 kernel: ACPI: button: Sleep Button [SLPF] Mar 6 03:02:22.888613 ldconfig[1530]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 6 03:02:22.889084 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 03:02:22.923103 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 6 03:02:22.987716 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 6 03:02:22.988423 systemd[1]: Reloading finished in 660 ms. 
Mar 6 03:02:23.000534 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 03:02:23.001938 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 6 03:02:23.004652 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 03:02:23.037166 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 03:02:23.042644 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 6 03:02:23.045125 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 6 03:02:23.052101 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 03:02:23.057309 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 03:02:23.063193 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 6 03:02:23.072006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:02:23.072347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 03:02:23.080737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 03:02:23.083081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 03:02:23.094823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 03:02:23.095590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 03:02:23.095796 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Mar 6 03:02:23.095948 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:02:23.108597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:02:23.109498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 03:02:23.109819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 03:02:23.109983 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 03:02:23.120697 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 6 03:02:23.121273 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:02:23.123805 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 6 03:02:23.137791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 03:02:23.140293 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 03:02:23.143287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 03:02:23.144244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 03:02:23.161412 systemd[1]: Finished ensure-sysext.service. Mar 6 03:02:23.163591 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 03:02:23.164698 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 6 03:02:23.178946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:02:23.180567 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 03:02:23.184355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 03:02:23.186335 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 03:02:23.186521 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 03:02:23.186709 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 03:02:23.186880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 03:02:23.188137 systemd[1]: Reached target time-set.target - System Time Set. Mar 6 03:02:23.188855 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:02:23.197671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 03:02:23.237652 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 03:02:23.237905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 03:02:23.239399 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 6 03:02:23.245348 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 6 03:02:23.298264 augenrules[1898]: No rules Mar 6 03:02:23.300574 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 6 03:02:23.302259 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 03:02:23.309120 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 6 03:02:23.322720 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 6 03:02:23.329571 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 6 03:02:23.344283 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 03:02:23.349800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 03:02:23.350155 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 03:02:23.354430 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 6 03:02:23.361332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 03:02:23.439485 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 6 03:02:23.444314 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 6 03:02:23.474131 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 6 03:02:23.526699 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 03:02:23.542425 systemd-networkd[1838]: lo: Link UP Mar 6 03:02:23.542818 systemd-networkd[1838]: lo: Gained carrier Mar 6 03:02:23.542978 systemd-resolved[1840]: Positive Trust Anchors: Mar 6 03:02:23.542988 systemd-resolved[1840]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 03:02:23.543044 systemd-resolved[1840]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 03:02:23.544880 systemd-networkd[1838]: Enumeration completed Mar 6 03:02:23.545023 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 03:02:23.546127 systemd-networkd[1838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 03:02:23.546133 systemd-networkd[1838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 03:02:23.550225 systemd-networkd[1838]: eth0: Link UP Mar 6 03:02:23.550417 systemd-networkd[1838]: eth0: Gained carrier Mar 6 03:02:23.550451 systemd-networkd[1838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 03:02:23.550648 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 6 03:02:23.551829 systemd-resolved[1840]: Defaulting to hostname 'linux'. Mar 6 03:02:23.553441 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 6 03:02:23.557843 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Mar 6 03:02:23.558210 systemd-networkd[1838]: eth0: DHCPv4 address 172.31.18.81/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 6 03:02:23.558575 systemd[1]: Reached target network.target - Network. Mar 6 03:02:23.560240 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 03:02:23.560857 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 03:02:23.562431 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 6 03:02:23.562993 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 6 03:02:23.563543 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 6 03:02:23.564314 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 6 03:02:23.565264 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 6 03:02:23.565655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 6 03:02:23.566022 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 6 03:02:23.566148 systemd[1]: Reached target paths.target - Path Units. Mar 6 03:02:23.566503 systemd[1]: Reached target timers.target - Timer Units. Mar 6 03:02:23.568282 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 6 03:02:23.570837 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 6 03:02:23.575252 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 6 03:02:23.576019 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 6 03:02:23.576487 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Mar 6 03:02:23.578842 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 6 03:02:23.579571 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 6 03:02:23.580837 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 6 03:02:23.582233 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 03:02:23.584130 systemd[1]: Reached target basic.target - Basic System. Mar 6 03:02:23.584746 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 6 03:02:23.584785 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 6 03:02:23.586024 systemd[1]: Starting containerd.service - containerd container runtime... Mar 6 03:02:23.588325 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 6 03:02:23.591343 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 6 03:02:23.595304 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 6 03:02:23.602909 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 6 03:02:23.610885 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 6 03:02:23.611623 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 6 03:02:23.614452 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 6 03:02:23.618287 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 6 03:02:23.628240 systemd[1]: Started ntpd.service - Network Time Service. Mar 6 03:02:23.641922 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 6 03:02:23.657097 jq[1938]: false Mar 6 03:02:23.659194 systemd[1]: Starting setup-oem.service - Setup OEM... 
Mar 6 03:02:23.664348 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 6 03:02:23.669500 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 6 03:02:23.670990 oslogin_cache_refresh[1940]: Refreshing passwd entry cache Mar 6 03:02:23.677466 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Refreshing passwd entry cache Mar 6 03:02:23.677466 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Failure getting users, quitting Mar 6 03:02:23.677466 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 6 03:02:23.677466 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Refreshing group entry cache Mar 6 03:02:23.676296 oslogin_cache_refresh[1940]: Failure getting users, quitting Mar 6 03:02:23.676318 oslogin_cache_refresh[1940]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 6 03:02:23.676378 oslogin_cache_refresh[1940]: Refreshing group entry cache Mar 6 03:02:23.682480 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Failure getting groups, quitting Mar 6 03:02:23.682480 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 6 03:02:23.678409 oslogin_cache_refresh[1940]: Failure getting groups, quitting Mar 6 03:02:23.678422 oslogin_cache_refresh[1940]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 6 03:02:23.689352 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 6 03:02:23.692859 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 6 03:02:23.694396 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 6 03:02:23.696089 systemd[1]: Starting update-engine.service - Update Engine... Mar 6 03:02:23.699848 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 6 03:02:23.706784 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 6 03:02:23.716152 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 6 03:02:23.717605 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 6 03:02:23.719129 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 6 03:02:23.719522 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 6 03:02:23.719781 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 6 03:02:23.730098 jq[1954]: true Mar 6 03:02:23.753020 extend-filesystems[1939]: Found /dev/nvme0n1p6 Mar 6 03:02:23.782145 extend-filesystems[1939]: Found /dev/nvme0n1p9 Mar 6 03:02:23.787823 extend-filesystems[1939]: Checking size of /dev/nvme0n1p9 Mar 6 03:02:23.810163 jq[1960]: true Mar 6 03:02:23.819000 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 6 03:02:23.819320 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 6 03:02:23.831078 update_engine[1953]: I20260306 03:02:23.824761 1953 main.cc:92] Flatcar Update Engine starting Mar 6 03:02:23.831399 extend-filesystems[1939]: Resized partition /dev/nvme0n1p9 Mar 6 03:02:23.844518 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 6 03:02:23.837177 systemd[1]: motdgen.service: Deactivated successfully. Mar 6 03:02:23.844700 extend-filesystems[1990]: resize2fs 1.47.3 (8-Jul-2025) Mar 6 03:02:23.839713 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 6 03:02:23.852239 (ntainerd)[1983]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 6 03:02:23.900927 coreos-metadata[1935]: Mar 06 03:02:23.900 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.902 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.903 INFO Fetch successful Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.903 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.909 INFO Fetch successful Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.909 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.911 INFO Fetch successful Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.911 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.915 INFO Fetch successful Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.915 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.916 INFO Fetch failed with 404: resource not found Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.916 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.920 INFO Fetch successful Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.920 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.920 INFO Fetch successful Mar 6 
03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.920 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.923 INFO Fetch successful Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.923 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.930 INFO Fetch successful Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.930 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 6 03:02:23.933710 coreos-metadata[1935]: Mar 06 03:02:23.931 INFO Fetch successful Mar 6 03:02:23.934480 tar[1967]: linux-amd64/LICENSE Mar 6 03:02:23.915105 dbus-daemon[1936]: [system] SELinux support is enabled Mar 6 03:02:23.916775 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 6 03:02:23.929155 dbus-daemon[1936]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1838 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 6 03:02:23.927703 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 6 03:02:23.940505 tar[1967]: linux-amd64/helm Mar 6 03:02:23.927743 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 6 03:02:23.929218 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 6 03:02:23.929245 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 6 03:02:23.936648 systemd-logind[1952]: Watching system buttons on /dev/input/event2 (Power Button) Mar 6 03:02:23.936672 systemd-logind[1952]: Watching system buttons on /dev/input/event3 (Sleep Button) Mar 6 03:02:23.936700 systemd-logind[1952]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 6 03:02:23.943642 systemd-logind[1952]: New seat seat0. Mar 6 03:02:23.954502 systemd[1]: Started systemd-logind.service - User Login Management. Mar 6 03:02:23.959801 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 6 03:02:23.963720 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 6 03:02:23.972884 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 6 03:02:23.976807 ntpd[1942]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:02:23.979481 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:02:23.979481 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:02:23.979481 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: ---------------------------------------------------- Mar 6 03:02:23.979481 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:02:23.979481 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:02:23.979481 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: corporation. 
Support and training for ntp-4 are Mar 6 03:02:23.979481 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: available at https://www.nwtime.org/support Mar 6 03:02:23.979481 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: ---------------------------------------------------- Mar 6 03:02:23.976890 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:02:23.976902 ntpd[1942]: ---------------------------------------------------- Mar 6 03:02:23.976913 ntpd[1942]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:02:23.976921 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:02:23.976931 ntpd[1942]: corporation. Support and training for ntp-4 are Mar 6 03:02:23.976942 ntpd[1942]: available at https://www.nwtime.org/support Mar 6 03:02:23.976952 ntpd[1942]: ---------------------------------------------------- Mar 6 03:02:23.981298 systemd[1]: Started update-engine.service - Update Engine. Mar 6 03:02:23.985265 update_engine[1953]: I20260306 03:02:23.984335 1953 update_check_scheduler.cc:74] Next update check in 2m33s Mar 6 03:02:23.992585 ntpd[1942]: proto: precision = 0.063 usec (-24) Mar 6 03:02:23.992727 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: proto: precision = 0.063 usec (-24) Mar 6 03:02:23.996527 ntpd[1942]: basedate set to 2026-02-21 Mar 6 03:02:23.996905 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: basedate set to 2026-02-21 Mar 6 03:02:23.996905 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: gps base set to 2026-02-22 (week 2407) Mar 6 03:02:23.996555 ntpd[1942]: gps base set to 2026-02-22 (week 2407) Mar 6 03:02:23.997129 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 6 03:02:23.998300 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:02:23.997833 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:02:23.999468 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:02:23.999554 ntpd[1942]: 6 Mar 03:02:23 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:02:24.004438 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:02:24.004820 ntpd[1942]: 6 Mar 03:02:24 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:02:24.004820 ntpd[1942]: 6 Mar 03:02:24 ntpd[1942]: Listen normally on 3 eth0 172.31.18.81:123 Mar 6 03:02:24.004820 ntpd[1942]: 6 Mar 03:02:24 ntpd[1942]: Listen normally on 4 lo [::1]:123 Mar 6 03:02:24.004820 ntpd[1942]: 6 Mar 03:02:24 ntpd[1942]: bind(21) AF_INET6 [fe80::4ae:57ff:fee2:88b7%2]:123 flags 0x811 failed: Cannot assign requested address Mar 6 03:02:24.004820 ntpd[1942]: 6 Mar 03:02:24 ntpd[1942]: unable to create socket on eth0 (5) for [fe80::4ae:57ff:fee2:88b7%2]:123 Mar 6 03:02:24.004511 ntpd[1942]: Listen normally on 3 eth0 172.31.18.81:123 Mar 6 03:02:24.004544 ntpd[1942]: Listen normally on 4 lo [::1]:123 Mar 6 03:02:24.005543 kernel: ntpd[1942]: segfault at 24 ip 0000559099070aeb sp 00007ffea535f440 error 4 in ntpd[68aeb,55909900e000+80000] likely on CPU 1 (core 0, socket 0) Mar 6 03:02:24.004578 ntpd[1942]: bind(21) AF_INET6 [fe80::4ae:57ff:fee2:88b7%2]:123 flags 0x811 failed: Cannot assign requested address Mar 6 03:02:24.004600 ntpd[1942]: unable to create socket on eth0 (5) for [fe80::4ae:57ff:fee2:88b7%2]:123 Mar 6 03:02:24.008607 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Mar 6 03:02:24.023083 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 6 03:02:24.050836 systemd-coredump[2030]: Process 1942 (ntpd) of user 0 terminated abnormally 
with signal 11/SEGV, processing... Mar 6 03:02:24.058604 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Mar 6 03:02:24.064031 extend-filesystems[1990]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 6 03:02:24.064031 extend-filesystems[1990]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 6 03:02:24.064031 extend-filesystems[1990]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 6 03:02:24.086851 extend-filesystems[1939]: Resized filesystem in /dev/nvme0n1p9 Mar 6 03:02:24.070827 systemd[1]: Started systemd-coredump@0-2030-0.service - Process Core Dump (PID 2030/UID 0). Mar 6 03:02:24.092426 bash[2027]: Updated "/home/core/.ssh/authorized_keys" Mar 6 03:02:24.073956 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 6 03:02:24.103502 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 6 03:02:24.106668 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 6 03:02:24.123219 systemd[1]: Starting sshkeys.service... Mar 6 03:02:24.125140 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 6 03:02:24.128573 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 03:02:24.235149 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 6 03:02:24.238608 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 6 03:02:24.414382 sshd_keygen[1976]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 03:02:24.475902 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 6 03:02:24.489781 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 6 03:02:24.494760 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2016 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 6 03:02:24.495226 locksmithd[2017]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 6 03:02:24.546813 systemd[1]: Starting polkit.service - Authorization Manager... Mar 6 03:02:24.550297 coreos-metadata[2073]: Mar 06 03:02:24.550 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 6 03:02:24.575508 coreos-metadata[2073]: Mar 06 03:02:24.575 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 6 03:02:24.575621 coreos-metadata[2073]: Mar 06 03:02:24.575 INFO Fetch successful Mar 6 03:02:24.575621 coreos-metadata[2073]: Mar 06 03:02:24.575 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 6 03:02:24.575621 coreos-metadata[2073]: Mar 06 03:02:24.575 INFO Fetch successful Mar 6 03:02:24.581470 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 03:02:24.583860 unknown[2073]: wrote ssh authorized keys file for user: core Mar 6 03:02:24.591173 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 03:02:24.609748 systemd-coredump[2034]: Process 1942 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1942: #0 0x0000559099070aeb n/a (ntpd + 0x68aeb) #1 0x0000559099019cdf n/a (ntpd + 0x11cdf) #2 0x000055909901a575 n/a (ntpd + 0x12575) #3 0x0000559099015d8a n/a (ntpd + 0xdd8a) #4 0x00005590990175d3 n/a (ntpd + 0xf5d3) #5 0x000055909901ffd1 n/a (ntpd + 0x17fd1) #6 0x0000559099010c2d n/a (ntpd + 0x8c2d) #7 0x00007fab1fa5716c n/a (libc.so.6 + 0x2716c) #8 0x00007fab1fa57229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000559099010c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Mar 6 03:02:24.614862 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 6 03:02:24.615058 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 6 03:02:24.620743 systemd[1]: systemd-coredump@0-2030-0.service: Deactivated successfully. Mar 6 03:02:24.646851 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 03:02:24.647162 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 03:02:24.657131 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 03:02:24.683091 update-ssh-keys[2140]: Updated "/home/core/.ssh/authorized_keys" Mar 6 03:02:24.683505 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 6 03:02:24.688965 systemd[1]: Finished sshkeys.service. Mar 6 03:02:24.696145 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 03:02:24.717001 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Mar 6 03:02:24.725496 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 03:02:24.733905 systemd[1]: Started ntpd.service - Network Time Service. 
Mar 6 03:02:24.737250 containerd[1983]: time="2026-03-06T03:02:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 6 03:02:24.738518 containerd[1983]: time="2026-03-06T03:02:24.737916156Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 6 03:02:24.739007 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 03:02:24.742030 systemd[1]: Reached target getty.target - Login Prompts. Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777161825Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.952µs" Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777199221Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777227241Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777425221Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777449176Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777491184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777563248Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 03:02:24.779913 containerd[1983]: 
time="2026-03-06T03:02:24.777579121Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777865691Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777887812Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777903773Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 03:02:24.779913 containerd[1983]: time="2026-03-06T03:02:24.777915658Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 6 03:02:24.780422 containerd[1983]: time="2026-03-06T03:02:24.778018852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 6 03:02:24.782247 containerd[1983]: time="2026-03-06T03:02:24.781684794Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 03:02:24.782247 containerd[1983]: time="2026-03-06T03:02:24.781764279Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 03:02:24.782247 containerd[1983]: time="2026-03-06T03:02:24.781787326Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 6 03:02:24.782247 containerd[1983]: time="2026-03-06T03:02:24.781952280Z" 
level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 6 03:02:24.786280 containerd[1983]: time="2026-03-06T03:02:24.783032021Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 6 03:02:24.786280 containerd[1983]: time="2026-03-06T03:02:24.783364951Z" level=info msg="metadata content store policy set" policy=shared Mar 6 03:02:24.788568 containerd[1983]: time="2026-03-06T03:02:24.788528059Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 6 03:02:24.789138 containerd[1983]: time="2026-03-06T03:02:24.789111023Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 6 03:02:24.789204 containerd[1983]: time="2026-03-06T03:02:24.789182733Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 6 03:02:24.789245 containerd[1983]: time="2026-03-06T03:02:24.789203543Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 6 03:02:24.789245 containerd[1983]: time="2026-03-06T03:02:24.789221541Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 6 03:02:24.789245 containerd[1983]: time="2026-03-06T03:02:24.789237045Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 6 03:02:24.789357 containerd[1983]: time="2026-03-06T03:02:24.789258045Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 6 03:02:24.789357 containerd[1983]: time="2026-03-06T03:02:24.789275929Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 6 03:02:24.789357 containerd[1983]: time="2026-03-06T03:02:24.789290984Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 6 03:02:24.789357 containerd[1983]: time="2026-03-06T03:02:24.789305930Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 6 03:02:24.789357 containerd[1983]: time="2026-03-06T03:02:24.789319801Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 6 03:02:24.789357 containerd[1983]: time="2026-03-06T03:02:24.789340693Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 6 03:02:24.789556 containerd[1983]: time="2026-03-06T03:02:24.789480497Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 6 03:02:24.789556 containerd[1983]: time="2026-03-06T03:02:24.789506020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 6 03:02:24.789556 containerd[1983]: time="2026-03-06T03:02:24.789532768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 6 03:02:24.789556 containerd[1983]: time="2026-03-06T03:02:24.789551644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 6 03:02:24.789692 containerd[1983]: time="2026-03-06T03:02:24.789567968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 6 03:02:24.789692 containerd[1983]: time="2026-03-06T03:02:24.789584221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 6 03:02:24.789692 containerd[1983]: time="2026-03-06T03:02:24.789599490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 6 03:02:24.789692 containerd[1983]: time="2026-03-06T03:02:24.789621484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 6 
03:02:24.789692 containerd[1983]: time="2026-03-06T03:02:24.789638490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 6 03:02:24.789692 containerd[1983]: time="2026-03-06T03:02:24.789654959Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 6 03:02:24.789692 containerd[1983]: time="2026-03-06T03:02:24.789669252Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 6 03:02:24.789941 containerd[1983]: time="2026-03-06T03:02:24.789730077Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 6 03:02:24.789941 containerd[1983]: time="2026-03-06T03:02:24.789748509Z" level=info msg="Start snapshots syncer" Mar 6 03:02:24.792901 containerd[1983]: time="2026-03-06T03:02:24.790486170Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 6 03:02:24.792901 containerd[1983]: time="2026-03-06T03:02:24.790856539Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.790936418Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.791423268Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.791562731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.791593386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.791609934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.791625233Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.791645479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.791662811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.791681351Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.792194363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.792223340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.792242855Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.792280431Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 03:02:24.793186 containerd[1983]: time="2026-03-06T03:02:24.792303808Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792317896Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792332544Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792346732Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792361481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792391733Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792412970Z" level=info msg="runtime interface created" Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792420282Z" level=info msg="created NRI interface" Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792432090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792449362Z" level=info msg="Connect containerd service" Mar 6 03:02:24.793657 containerd[1983]: time="2026-03-06T03:02:24.792478688Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 6 03:02:24.793657 containerd[1983]: 
time="2026-03-06T03:02:24.793320316Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 03:02:24.823767 ntpd[2163]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: ---------------------------------------------------- Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: corporation. Support and training for ntp-4 are Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: available at https://www.nwtime.org/support Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: ---------------------------------------------------- Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: proto: precision = 0.090 usec (-23) Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: basedate set to 2026-02-21 Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: gps base set to 2026-02-22 (week 2407) Mar 6 03:02:24.825444 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:02:24.823844 ntpd[2163]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:02:24.823855 ntpd[2163]: ---------------------------------------------------- Mar 6 03:02:24.823865 ntpd[2163]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:02:24.823874 ntpd[2163]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:02:24.823883 ntpd[2163]: corporation. Support and training for ntp-4 are Mar 6 03:02:24.823892 ntpd[2163]: available at https://www.nwtime.org/support Mar 6 03:02:24.823901 ntpd[2163]: ---------------------------------------------------- Mar 6 03:02:24.824692 ntpd[2163]: proto: precision = 0.090 usec (-23) Mar 6 03:02:24.824943 ntpd[2163]: basedate set to 2026-02-21 Mar 6 03:02:24.824954 ntpd[2163]: gps base set to 2026-02-22 (week 2407) Mar 6 03:02:24.825037 ntpd[2163]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:02:24.837429 kernel: ntpd[2163]: segfault at 24 ip 00005611a3ce8aeb sp 00007ffd4f48bc40 error 4 in ntpd[68aeb,5611a3c86000+80000] likely on CPU 1 (core 0, socket 0) Mar 6 03:02:24.837567 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Mar 6 03:02:24.830094 ntpd[2163]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:02:24.837674 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:02:24.837674 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:02:24.837674 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: Listen normally on 3 eth0 172.31.18.81:123 Mar 6 03:02:24.837674 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: Listen normally on 4 lo [::1]:123 Mar 6 03:02:24.837674 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: bind(21) AF_INET6 [fe80::4ae:57ff:fee2:88b7%2]:123 flags 0x811 failed: Cannot assign requested address Mar 6 03:02:24.837674 ntpd[2163]: 6 Mar 03:02:24 ntpd[2163]: unable to create socket on eth0 (5) for [fe80::4ae:57ff:fee2:88b7%2]:123 Mar 6 03:02:24.830324 ntpd[2163]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:02:24.830351 ntpd[2163]: Listen normally on 3 eth0 172.31.18.81:123 Mar 6 03:02:24.830382 ntpd[2163]: Listen normally on 4 lo [::1]:123 Mar 6 03:02:24.830418 
ntpd[2163]: bind(21) AF_INET6 [fe80::4ae:57ff:fee2:88b7%2]:123 flags 0x811 failed: Cannot assign requested address Mar 6 03:02:24.830439 ntpd[2163]: unable to create socket on eth0 (5) for [fe80::4ae:57ff:fee2:88b7%2]:123 Mar 6 03:02:24.860459 systemd-coredump[2170]: Process 2163 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Mar 6 03:02:24.869387 systemd[1]: Started systemd-coredump@1-2170-0.service - Process Core Dump (PID 2170/UID 0). Mar 6 03:02:24.946522 systemd-networkd[1838]: eth0: Gained IPv6LL Mar 6 03:02:24.954025 polkitd[2127]: Started polkitd version 126 Mar 6 03:02:24.955143 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 03:02:24.958795 systemd[1]: Reached target network-online.target - Network is Online. Mar 6 03:02:24.963422 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 6 03:02:24.970403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:24.974169 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 03:02:24.993580 polkitd[2127]: Loading rules from directory /etc/polkit-1/rules.d Mar 6 03:02:24.998545 polkitd[2127]: Loading rules from directory /run/polkit-1/rules.d Mar 6 03:02:24.998635 polkitd[2127]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 6 03:02:24.999136 polkitd[2127]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 6 03:02:24.999184 polkitd[2127]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 6 03:02:24.999234 polkitd[2127]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 6 03:02:25.013432 polkitd[2127]: Finished loading, compiling and executing 2 rules Mar 6 03:02:25.013731 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 6 03:02:25.017101 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 6 03:02:25.030593 polkitd[2127]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 6 03:02:25.080415 systemd-hostnamed[2016]: Hostname set to (transient) Mar 6 03:02:25.081745 systemd-resolved[1840]: System hostname changed to 'ip-172-31-18-81'. Mar 6 03:02:25.098275 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 03:02:25.131268 systemd-coredump[2171]: Process 2163 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2163: #0 0x00005611a3ce8aeb n/a (ntpd + 0x68aeb) #1 0x00005611a3c91cdf n/a (ntpd + 0x11cdf) #2 0x00005611a3c92575 n/a (ntpd + 0x12575) #3 0x00005611a3c8dd8a n/a (ntpd + 0xdd8a) #4 0x00005611a3c8f5d3 n/a (ntpd + 0xf5d3) #5 0x00005611a3c97fd1 n/a (ntpd + 0x17fd1) #6 0x00005611a3c88c2d n/a (ntpd + 0x8c2d) #7 0x00007f7420cac16c n/a (libc.so.6 + 0x2716c) #8 0x00007f7420cac229 __libc_start_main (libc.so.6 + 0x27229) #9 0x00005611a3c88c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Mar 6 03:02:25.133558 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 6 03:02:25.133743 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153357132Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153419867Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153449729Z" level=info msg="Start subscribing containerd event" Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153481001Z" level=info msg="Start recovering state" Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153584495Z" level=info msg="Start event monitor" Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153601166Z" level=info msg="Start cni network conf syncer for default" Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153615022Z" level=info msg="Start streaming server" Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153626642Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153636490Z" level=info msg="runtime interface starting up..." Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153644253Z" level=info msg="starting plugins..." Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153661509Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 6 03:02:25.174146 containerd[1983]: time="2026-03-06T03:02:25.153810988Z" level=info msg="containerd successfully booted in 0.418026s" Mar 6 03:02:25.143589 systemd[1]: systemd-coredump@1-2170-0.service: Deactivated successfully. Mar 6 03:02:25.153870 systemd[1]: Started containerd.service - containerd container runtime. Mar 6 03:02:25.177694 amazon-ssm-agent[2180]: Initializing new seelog logger Mar 6 03:02:25.178368 amazon-ssm-agent[2180]: New Seelog Logger Creation Complete Mar 6 03:02:25.179080 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 03:02:25.179080 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 6 03:02:25.179080 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 processing appconfig overrides Mar 6 03:02:25.180099 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 03:02:25.180185 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 03:02:25.180336 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 processing appconfig overrides Mar 6 03:02:25.180689 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 03:02:25.180757 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 03:02:25.180888 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 processing appconfig overrides Mar 6 03:02:25.181379 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.1794 INFO Proxy environment variables: Mar 6 03:02:25.184630 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 03:02:25.184630 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 03:02:25.184766 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 processing appconfig overrides Mar 6 03:02:25.255473 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Mar 6 03:02:25.260337 systemd[1]: Started ntpd.service - Network Time Service. 
Mar 6 03:02:25.281148 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.1800 INFO https_proxy: Mar 6 03:02:25.310651 ntpd[2217]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:02:25.310720 ntpd[2217]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:02:25.311203 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:02:25.311203 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:02:25.311203 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: ---------------------------------------------------- Mar 6 03:02:25.311203 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:02:25.311203 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:02:25.311203 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: corporation. Support and training for ntp-4 are Mar 6 03:02:25.311203 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: available at https://www.nwtime.org/support Mar 6 03:02:25.311203 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: ---------------------------------------------------- Mar 6 03:02:25.310731 ntpd[2217]: ---------------------------------------------------- Mar 6 03:02:25.310739 ntpd[2217]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:02:25.310748 ntpd[2217]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:02:25.310757 ntpd[2217]: corporation. 
Support and training for ntp-4 are Mar 6 03:02:25.310766 ntpd[2217]: available at https://www.nwtime.org/support Mar 6 03:02:25.310775 ntpd[2217]: ---------------------------------------------------- Mar 6 03:02:25.312835 ntpd[2217]: proto: precision = 0.095 usec (-23) Mar 6 03:02:25.313263 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: proto: precision = 0.095 usec (-23) Mar 6 03:02:25.313263 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: basedate set to 2026-02-21 Mar 6 03:02:25.313263 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: gps base set to 2026-02-22 (week 2407) Mar 6 03:02:25.313263 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:02:25.313263 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:02:25.313124 ntpd[2217]: basedate set to 2026-02-21 Mar 6 03:02:25.313139 ntpd[2217]: gps base set to 2026-02-22 (week 2407) Mar 6 03:02:25.313544 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:02:25.313544 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: Listen normally on 3 eth0 172.31.18.81:123 Mar 6 03:02:25.313544 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: Listen normally on 4 lo [::1]:123 Mar 6 03:02:25.313544 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: Listen normally on 5 eth0 [fe80::4ae:57ff:fee2:88b7%2]:123 Mar 6 03:02:25.313231 ntpd[2217]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:02:25.313730 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: Listening on routing socket on fd #22 for interface updates Mar 6 03:02:25.313258 ntpd[2217]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:02:25.313445 ntpd[2217]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:02:25.313472 ntpd[2217]: Listen normally on 3 eth0 172.31.18.81:123 Mar 6 03:02:25.313499 ntpd[2217]: Listen normally on 4 lo [::1]:123 Mar 6 03:02:25.313526 ntpd[2217]: Listen normally on 5 eth0 [fe80::4ae:57ff:fee2:88b7%2]:123 Mar 6 03:02:25.313552 ntpd[2217]: Listening on routing socket on fd #22 for interface updates Mar 6 
03:02:25.316868 ntpd[2217]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 03:02:25.318023 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 03:02:25.318023 ntpd[2217]: 6 Mar 03:02:25 ntpd[2217]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 03:02:25.316896 ntpd[2217]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 03:02:25.355233 tar[1967]: linux-amd64/README.md Mar 6 03:02:25.375889 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 03:02:25.379560 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.1800 INFO http_proxy: Mar 6 03:02:25.480080 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.1800 INFO no_proxy: Mar 6 03:02:25.577637 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.1803 INFO Checking if agent identity type OnPrem can be assumed Mar 6 03:02:25.584355 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 03:02:25.584355 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 03:02:25.584729 amazon-ssm-agent[2180]: 2026/03/06 03:02:25 processing appconfig overrides Mar 6 03:02:25.614283 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.1805 INFO Checking if agent identity type EC2 can be assumed Mar 6 03:02:25.614283 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.2491 INFO Agent will take identity from EC2 Mar 6 03:02:25.614283 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.2514 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Mar 6 03:02:25.614283 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.2514 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 6 03:02:25.614283 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.2514 INFO [amazon-ssm-agent] Starting Core Agent Mar 6 03:02:25.614283 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.2514 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Mar 6 03:02:25.614283 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.2514 INFO [Registrar] Starting registrar module Mar 6 03:02:25.614283 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.2533 INFO [EC2Identity] Checking disk for registration info Mar 6 03:02:25.614283 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.2534 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Mar 6 03:02:25.614736 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.2534 INFO [EC2Identity] Generating registration keypair Mar 6 03:02:25.614736 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.5400 INFO [EC2Identity] Checking write access before registering Mar 6 03:02:25.614736 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.5404 INFO [EC2Identity] Registering EC2 instance with Systems Manager Mar 6 03:02:25.614736 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.5841 INFO [EC2Identity] EC2 registration was successful. Mar 6 03:02:25.614736 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.5841 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Mar 6 03:02:25.614736 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.5842 INFO [CredentialRefresher] credentialRefresher has started Mar 6 03:02:25.614736 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.5842 INFO [CredentialRefresher] Starting credentials refresher loop Mar 6 03:02:25.614736 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.6139 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 6 03:02:25.614736 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.6142 INFO [CredentialRefresher] Credentials ready Mar 6 03:02:25.677078 amazon-ssm-agent[2180]: 2026-03-06 03:02:25.6144 INFO [CredentialRefresher] Next credential rotation will be in 29.9999921918 minutes Mar 6 03:02:26.432032 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 6 03:02:26.434140 systemd[1]: Started sshd@0-172.31.18.81:22-68.220.241.50:57350.service - OpenSSH per-connection server daemon (68.220.241.50:57350). Mar 6 03:02:26.625584 amazon-ssm-agent[2180]: 2026-03-06 03:02:26.6254 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 6 03:02:26.726392 amazon-ssm-agent[2180]: 2026-03-06 03:02:26.6277 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2232) started Mar 6 03:02:26.827359 amazon-ssm-agent[2180]: 2026-03-06 03:02:26.6277 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 6 03:02:26.932549 sshd[2226]: Accepted publickey for core from 68.220.241.50 port 57350 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:02:26.934187 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:02:26.941238 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 03:02:26.944168 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 03:02:26.957797 systemd-logind[1952]: New session 1 of user core. Mar 6 03:02:26.973213 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 03:02:26.978804 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 03:02:26.994754 (systemd)[2246]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 03:02:26.997685 systemd-logind[1952]: New session c1 of user core. Mar 6 03:02:27.152241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:02:27.153570 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 6 03:02:27.163763 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 03:02:27.192533 systemd[2246]: Queued start job for default target default.target. Mar 6 03:02:27.197479 systemd[2246]: Created slice app.slice - User Application Slice. Mar 6 03:02:27.197522 systemd[2246]: Reached target paths.target - Paths. Mar 6 03:02:27.197584 systemd[2246]: Reached target timers.target - Timers. Mar 6 03:02:27.199092 systemd[2246]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 03:02:27.214476 systemd[2246]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 03:02:27.214634 systemd[2246]: Reached target sockets.target - Sockets. Mar 6 03:02:27.214699 systemd[2246]: Reached target basic.target - Basic System. Mar 6 03:02:27.214755 systemd[2246]: Reached target default.target - Main User Target. Mar 6 03:02:27.214796 systemd[2246]: Startup finished in 206ms. Mar 6 03:02:27.214957 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 03:02:27.222700 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 03:02:27.223966 systemd[1]: Startup finished in 2.629s (kernel) + 7.955s (initrd) + 7.690s (userspace) = 18.275s. Mar 6 03:02:27.477232 systemd[1]: Started sshd@1-172.31.18.81:22-68.220.241.50:57358.service - OpenSSH per-connection server daemon (68.220.241.50:57358). Mar 6 03:02:27.915733 sshd[2271]: Accepted publickey for core from 68.220.241.50 port 57358 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:02:27.917659 sshd-session[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:02:27.923350 systemd-logind[1952]: New session 2 of user core. Mar 6 03:02:27.929269 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 6 03:02:28.073388 kubelet[2257]: E0306 03:02:28.073319 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 03:02:28.077330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 03:02:28.077530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 03:02:28.078280 systemd[1]: kubelet.service: Consumed 1.001s CPU time, 254.7M memory peak. Mar 6 03:02:28.158012 sshd[2274]: Connection closed by 68.220.241.50 port 57358 Mar 6 03:02:28.159803 sshd-session[2271]: pam_unix(sshd:session): session closed for user core Mar 6 03:02:28.163299 systemd[1]: sshd@1-172.31.18.81:22-68.220.241.50:57358.service: Deactivated successfully. Mar 6 03:02:28.165533 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 03:02:28.167429 systemd-logind[1952]: Session 2 logged out. Waiting for processes to exit. Mar 6 03:02:28.169419 systemd-logind[1952]: Removed session 2. Mar 6 03:02:28.248930 systemd[1]: Started sshd@2-172.31.18.81:22-68.220.241.50:57362.service - OpenSSH per-connection server daemon (68.220.241.50:57362). Mar 6 03:02:28.694747 sshd[2282]: Accepted publickey for core from 68.220.241.50 port 57362 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:02:28.696025 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:02:28.701930 systemd-logind[1952]: New session 3 of user core. Mar 6 03:02:28.708269 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 6 03:02:28.933357 sshd[2285]: Connection closed by 68.220.241.50 port 57362 Mar 6 03:02:28.934317 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Mar 6 03:02:28.938887 systemd[1]: sshd@2-172.31.18.81:22-68.220.241.50:57362.service: Deactivated successfully. Mar 6 03:02:28.940859 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 03:02:28.941752 systemd-logind[1952]: Session 3 logged out. Waiting for processes to exit. Mar 6 03:02:28.943193 systemd-logind[1952]: Removed session 3. Mar 6 03:02:29.022320 systemd[1]: Started sshd@3-172.31.18.81:22-68.220.241.50:57366.service - OpenSSH per-connection server daemon (68.220.241.50:57366). Mar 6 03:02:29.465113 sshd[2291]: Accepted publickey for core from 68.220.241.50 port 57366 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:02:29.466061 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:02:29.472777 systemd-logind[1952]: New session 4 of user core. Mar 6 03:02:29.479286 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 03:02:29.702422 sshd[2294]: Connection closed by 68.220.241.50 port 57366 Mar 6 03:02:29.703322 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Mar 6 03:02:29.707939 systemd[1]: sshd@3-172.31.18.81:22-68.220.241.50:57366.service: Deactivated successfully. Mar 6 03:02:29.709921 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 03:02:29.710934 systemd-logind[1952]: Session 4 logged out. Waiting for processes to exit. Mar 6 03:02:29.712687 systemd-logind[1952]: Removed session 4. Mar 6 03:02:29.791525 systemd[1]: Started sshd@4-172.31.18.81:22-68.220.241.50:57382.service - OpenSSH per-connection server daemon (68.220.241.50:57382). 
Mar 6 03:02:30.222559 sshd[2300]: Accepted publickey for core from 68.220.241.50 port 57382 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:02:30.223963 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:02:30.229742 systemd-logind[1952]: New session 5 of user core. Mar 6 03:02:30.241322 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 03:02:30.431588 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 03:02:30.431958 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 03:02:30.447268 sudo[2304]: pam_unix(sudo:session): session closed for user root Mar 6 03:02:30.525431 sshd[2303]: Connection closed by 68.220.241.50 port 57382 Mar 6 03:02:30.526336 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Mar 6 03:02:30.531457 systemd[1]: sshd@4-172.31.18.81:22-68.220.241.50:57382.service: Deactivated successfully. Mar 6 03:02:30.533520 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 03:02:30.535435 systemd-logind[1952]: Session 5 logged out. Waiting for processes to exit. Mar 6 03:02:30.536795 systemd-logind[1952]: Removed session 5. Mar 6 03:02:30.618677 systemd[1]: Started sshd@5-172.31.18.81:22-68.220.241.50:57398.service - OpenSSH per-connection server daemon (68.220.241.50:57398). Mar 6 03:02:31.055316 sshd[2310]: Accepted publickey for core from 68.220.241.50 port 57398 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:02:31.056847 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:02:31.062546 systemd-logind[1952]: New session 6 of user core. Mar 6 03:02:31.065271 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 6 03:02:31.217132 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 03:02:31.217487 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 03:02:31.224392 sudo[2315]: pam_unix(sudo:session): session closed for user root Mar 6 03:02:31.229864 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 6 03:02:31.230234 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 03:02:31.240806 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 03:02:31.279619 augenrules[2337]: No rules Mar 6 03:02:31.280960 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 03:02:31.281208 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 03:02:31.282530 sudo[2314]: pam_unix(sudo:session): session closed for user root Mar 6 03:02:31.361571 sshd[2313]: Connection closed by 68.220.241.50 port 57398 Mar 6 03:02:31.363235 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Mar 6 03:02:31.367517 systemd[1]: sshd@5-172.31.18.81:22-68.220.241.50:57398.service: Deactivated successfully. Mar 6 03:02:31.369516 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 03:02:31.370604 systemd-logind[1952]: Session 6 logged out. Waiting for processes to exit. Mar 6 03:02:31.372248 systemd-logind[1952]: Removed session 6. Mar 6 03:02:31.450613 systemd[1]: Started sshd@6-172.31.18.81:22-68.220.241.50:57410.service - OpenSSH per-connection server daemon (68.220.241.50:57410). 
Mar 6 03:02:31.888114 sshd[2346]: Accepted publickey for core from 68.220.241.50 port 57410 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:02:31.889174 sshd-session[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:02:31.894914 systemd-logind[1952]: New session 7 of user core. Mar 6 03:02:31.902291 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 03:02:32.048697 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 03:02:32.049054 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 03:02:32.837643 systemd-resolved[1840]: Clock change detected. Flushing caches. Mar 6 03:02:33.256302 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 03:02:33.280963 (dockerd)[2369]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 03:02:33.928012 dockerd[2369]: time="2026-03-06T03:02:33.927935507Z" level=info msg="Starting up" Mar 6 03:02:33.929172 dockerd[2369]: time="2026-03-06T03:02:33.929136418Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 6 03:02:33.943435 dockerd[2369]: time="2026-03-06T03:02:33.943357953Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 6 03:02:33.978329 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1953650491-merged.mount: Deactivated successfully. Mar 6 03:02:34.460848 dockerd[2369]: time="2026-03-06T03:02:34.460798767Z" level=info msg="Loading containers: start." Mar 6 03:02:34.473449 kernel: Initializing XFRM netlink socket Mar 6 03:02:34.751758 (udev-worker)[2390]: Network interface NamePolicy= disabled on kernel command line. 
Mar 6 03:02:34.797646 systemd-networkd[1838]: docker0: Link UP Mar 6 03:02:34.807987 dockerd[2369]: time="2026-03-06T03:02:34.807928794Z" level=info msg="Loading containers: done." Mar 6 03:02:34.825459 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3104297374-merged.mount: Deactivated successfully. Mar 6 03:02:34.827068 dockerd[2369]: time="2026-03-06T03:02:34.827021676Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 03:02:34.827171 dockerd[2369]: time="2026-03-06T03:02:34.827127256Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 6 03:02:34.827254 dockerd[2369]: time="2026-03-06T03:02:34.827231189Z" level=info msg="Initializing buildkit" Mar 6 03:02:34.852647 dockerd[2369]: time="2026-03-06T03:02:34.852593246Z" level=info msg="Completed buildkit initialization" Mar 6 03:02:34.861133 dockerd[2369]: time="2026-03-06T03:02:34.861080853Z" level=info msg="Daemon has completed initialization" Mar 6 03:02:34.861133 dockerd[2369]: time="2026-03-06T03:02:34.861162867Z" level=info msg="API listen on /run/docker.sock" Mar 6 03:02:34.861361 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 03:02:35.909855 containerd[1983]: time="2026-03-06T03:02:35.909807332Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 6 03:02:36.466400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169624768.mount: Deactivated successfully. 
Mar 6 03:02:38.411972 containerd[1983]: time="2026-03-06T03:02:38.411917844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:38.413175 containerd[1983]: time="2026-03-06T03:02:38.413007260Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 6 03:02:38.414226 containerd[1983]: time="2026-03-06T03:02:38.414185689Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:38.417140 containerd[1983]: time="2026-03-06T03:02:38.417079636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:38.418209 containerd[1983]: time="2026-03-06T03:02:38.418013317Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.508166913s" Mar 6 03:02:38.418209 containerd[1983]: time="2026-03-06T03:02:38.418059232Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 6 03:02:38.419013 containerd[1983]: time="2026-03-06T03:02:38.418629446Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 6 03:02:38.720693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 6 03:02:38.723198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:39.328717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:02:39.341952 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 03:02:39.387821 kubelet[2646]: E0306 03:02:39.387742 2646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 03:02:39.391563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 03:02:39.391757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 03:02:39.392134 systemd[1]: kubelet.service: Consumed 182ms CPU time, 110.7M memory peak. 
Mar 6 03:02:40.830561 containerd[1983]: time="2026-03-06T03:02:40.829902963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:40.840959 containerd[1983]: time="2026-03-06T03:02:40.840904343Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 6 03:02:40.853955 containerd[1983]: time="2026-03-06T03:02:40.853872488Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:40.874888 containerd[1983]: time="2026-03-06T03:02:40.874804311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:40.876407 containerd[1983]: time="2026-03-06T03:02:40.876116897Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 2.457451661s" Mar 6 03:02:40.876407 containerd[1983]: time="2026-03-06T03:02:40.876163946Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 6 03:02:40.877308 containerd[1983]: time="2026-03-06T03:02:40.876883075Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 6 03:02:42.268392 containerd[1983]: time="2026-03-06T03:02:42.268338180Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:42.269646 containerd[1983]: time="2026-03-06T03:02:42.269443074Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 6 03:02:42.270899 containerd[1983]: time="2026-03-06T03:02:42.270866074Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:42.273525 containerd[1983]: time="2026-03-06T03:02:42.273495869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:42.274566 containerd[1983]: time="2026-03-06T03:02:42.274532965Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.39761593s" Mar 6 03:02:42.274643 containerd[1983]: time="2026-03-06T03:02:42.274572902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 6 03:02:42.275399 containerd[1983]: time="2026-03-06T03:02:42.275248207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 6 03:02:43.621309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897899783.mount: Deactivated successfully. 
Mar 6 03:02:44.008305 containerd[1983]: time="2026-03-06T03:02:44.008252241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:44.009241 containerd[1983]: time="2026-03-06T03:02:44.009195497Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 6 03:02:44.010484 containerd[1983]: time="2026-03-06T03:02:44.010404247Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:44.012919 containerd[1983]: time="2026-03-06T03:02:44.012859276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:44.013837 containerd[1983]: time="2026-03-06T03:02:44.013395356Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.738110596s" Mar 6 03:02:44.013837 containerd[1983]: time="2026-03-06T03:02:44.013452128Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 6 03:02:44.013972 containerd[1983]: time="2026-03-06T03:02:44.013947400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 6 03:02:44.529889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384156575.mount: Deactivated successfully. 
Mar 6 03:02:46.216363 containerd[1983]: time="2026-03-06T03:02:46.216308647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:46.217652 containerd[1983]: time="2026-03-06T03:02:46.217464865Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 6 03:02:46.218607 containerd[1983]: time="2026-03-06T03:02:46.218576557Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:46.221508 containerd[1983]: time="2026-03-06T03:02:46.221449323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:46.222957 containerd[1983]: time="2026-03-06T03:02:46.222535466Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.208559548s" Mar 6 03:02:46.222957 containerd[1983]: time="2026-03-06T03:02:46.222573489Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 6 03:02:46.223165 containerd[1983]: time="2026-03-06T03:02:46.223132982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 6 03:02:46.647765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2012284861.mount: Deactivated successfully. 
Mar 6 03:02:46.657880 containerd[1983]: time="2026-03-06T03:02:46.657824318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:46.659711 containerd[1983]: time="2026-03-06T03:02:46.659537850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 6 03:02:46.661690 containerd[1983]: time="2026-03-06T03:02:46.661652149Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:46.665224 containerd[1983]: time="2026-03-06T03:02:46.665066755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:46.666278 containerd[1983]: time="2026-03-06T03:02:46.665902606Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 442.736354ms" Mar 6 03:02:46.666278 containerd[1983]: time="2026-03-06T03:02:46.665943387Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 6 03:02:46.666606 containerd[1983]: time="2026-03-06T03:02:46.666572456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 6 03:02:47.212113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111115524.mount: Deactivated successfully. 
Mar 6 03:02:48.252527 containerd[1983]: time="2026-03-06T03:02:48.252475570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:48.253668 containerd[1983]: time="2026-03-06T03:02:48.253587056Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 6 03:02:48.255088 containerd[1983]: time="2026-03-06T03:02:48.255032253Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:48.259150 containerd[1983]: time="2026-03-06T03:02:48.259106725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:48.263462 containerd[1983]: time="2026-03-06T03:02:48.262538244Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.595928552s" Mar 6 03:02:48.263462 containerd[1983]: time="2026-03-06T03:02:48.262577647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 6 03:02:49.469260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 6 03:02:49.473653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:49.763645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 03:02:49.776171 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 03:02:49.841387 kubelet[2818]: E0306 03:02:49.841311 2818 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 03:02:49.844650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 03:02:49.844989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 03:02:49.845525 systemd[1]: kubelet.service: Consumed 203ms CPU time, 107.9M memory peak. Mar 6 03:02:49.870023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:02:49.870257 systemd[1]: kubelet.service: Consumed 203ms CPU time, 107.9M memory peak. Mar 6 03:02:49.872884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:49.907954 systemd[1]: Reload requested from client PID 2832 ('systemctl') (unit session-7.scope)... Mar 6 03:02:49.907971 systemd[1]: Reloading... Mar 6 03:02:50.061450 zram_generator::config[2877]: No configuration found. Mar 6 03:02:50.341144 systemd[1]: Reloading finished in 432 ms. Mar 6 03:02:50.420283 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 6 03:02:50.420391 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 6 03:02:50.420723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:02:50.420782 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.5M memory peak. Mar 6 03:02:50.422847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:50.712786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 03:02:50.722228 (kubelet)[2940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 03:02:50.781436 kubelet[2940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 03:02:51.114792 kubelet[2940]: I0306 03:02:51.114584 2940 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 6 03:02:51.114792 kubelet[2940]: I0306 03:02:51.114643 2940 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 03:02:51.116754 kubelet[2940]: I0306 03:02:51.116714 2940 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 6 03:02:51.117538 kubelet[2940]: I0306 03:02:51.116832 2940 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 03:02:51.117538 kubelet[2940]: I0306 03:02:51.117250 2940 server.go:951] "Client rotation is on, will bootstrap in background" Mar 6 03:02:51.133876 kubelet[2940]: I0306 03:02:51.133833 2940 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 03:02:51.136013 kubelet[2940]: E0306 03:02:51.135972 2940 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.81:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 03:02:51.142791 kubelet[2940]: I0306 03:02:51.142758 2940 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 03:02:51.152908 kubelet[2940]: I0306 03:02:51.152868 2940 server.go:775] "--cgroups-per-qos enabled, but 
--cgroup-root was not specified. Defaulting to /" Mar 6 03:02:51.159113 kubelet[2940]: I0306 03:02:51.159034 2940 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 03:02:51.161028 kubelet[2940]: I0306 03:02:51.159098 2940 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-81","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 03:02:51.161028 kubelet[2940]: I0306 03:02:51.161030 2940 topology_manager.go:143] "Creating topology manager with 
none policy" Mar 6 03:02:51.161251 kubelet[2940]: I0306 03:02:51.161047 2940 container_manager_linux.go:308] "Creating device plugin manager" Mar 6 03:02:51.161251 kubelet[2940]: I0306 03:02:51.161181 2940 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 6 03:02:51.165590 kubelet[2940]: I0306 03:02:51.165559 2940 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 6 03:02:51.165798 kubelet[2940]: I0306 03:02:51.165780 2940 kubelet.go:482] "Attempting to sync node with API server" Mar 6 03:02:51.165883 kubelet[2940]: I0306 03:02:51.165801 2940 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 03:02:51.165883 kubelet[2940]: I0306 03:02:51.165834 2940 kubelet.go:394] "Adding apiserver pod source" Mar 6 03:02:51.165883 kubelet[2940]: I0306 03:02:51.165848 2940 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 03:02:51.171166 kubelet[2940]: I0306 03:02:51.170635 2940 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 6 03:02:51.174412 kubelet[2940]: I0306 03:02:51.174382 2940 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 03:02:51.174586 kubelet[2940]: I0306 03:02:51.174572 2940 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 6 03:02:51.174833 kubelet[2940]: W0306 03:02:51.174819 2940 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 6 03:02:51.181662 kubelet[2940]: I0306 03:02:51.181641 2940 server.go:1257] "Started kubelet" Mar 6 03:02:51.184005 kubelet[2940]: I0306 03:02:51.183968 2940 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 6 03:02:51.192507 kubelet[2940]: E0306 03:02:51.189988 2940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.81:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.81:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-81.189a2170af1bca8a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-81,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-81,},FirstTimestamp:2026-03-06 03:02:51.18159937 +0000 UTC m=+0.454424360,LastTimestamp:2026-03-06 03:02:51.18159937 +0000 UTC m=+0.454424360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-81,}" Mar 6 03:02:51.193217 kubelet[2940]: I0306 03:02:51.193036 2940 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 03:02:51.194170 kubelet[2940]: I0306 03:02:51.194150 2940 server.go:317] "Adding debug handlers to kubelet server" Mar 6 03:02:51.195242 kubelet[2940]: I0306 03:02:51.195226 2940 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 6 03:02:51.195573 kubelet[2940]: E0306 03:02:51.195554 2940 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-18-81\" not found" Mar 6 03:02:51.198470 kubelet[2940]: I0306 03:02:51.198249 2940 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 03:02:51.198470 kubelet[2940]: I0306 03:02:51.198333 2940 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 6 03:02:51.198591 kubelet[2940]: 
I0306 03:02:51.198562 2940 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 03:02:51.198876 kubelet[2940]: I0306 03:02:51.198850 2940 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 6 03:02:51.198948 kubelet[2940]: I0306 03:02:51.198900 2940 reconciler.go:29] "Reconciler: start to sync state" Mar 6 03:02:51.200474 kubelet[2940]: I0306 03:02:51.200103 2940 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 03:02:51.202508 kubelet[2940]: E0306 03:02:51.202473 2940 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-81?timeout=10s\": dial tcp 172.31.18.81:6443: connect: connection refused" interval="200ms" Mar 6 03:02:51.202847 kubelet[2940]: I0306 03:02:51.202828 2940 factory.go:223] Registration of the systemd container factory successfully Mar 6 03:02:51.203026 kubelet[2940]: I0306 03:02:51.202928 2940 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 03:02:51.205310 kubelet[2940]: E0306 03:02:51.205287 2940 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 03:02:51.205589 kubelet[2940]: I0306 03:02:51.205486 2940 factory.go:223] Registration of the containerd container factory successfully Mar 6 03:02:51.212788 kubelet[2940]: I0306 03:02:51.212751 2940 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 6 03:02:51.214502 kubelet[2940]: I0306 03:02:51.214301 2940 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 6 03:02:51.214502 kubelet[2940]: I0306 03:02:51.214324 2940 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 6 03:02:51.214502 kubelet[2940]: I0306 03:02:51.214351 2940 kubelet.go:2501] "Starting kubelet main sync loop" Mar 6 03:02:51.214502 kubelet[2940]: E0306 03:02:51.214401 2940 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 03:02:51.231840 kubelet[2940]: I0306 03:02:51.231809 2940 cpu_manager.go:225] "Starting" policy="none" Mar 6 03:02:51.231840 kubelet[2940]: I0306 03:02:51.231825 2940 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 6 03:02:51.231840 kubelet[2940]: I0306 03:02:51.231844 2940 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 6 03:02:51.235647 kubelet[2940]: I0306 03:02:51.235612 2940 policy_none.go:50] "Start" Mar 6 03:02:51.235647 kubelet[2940]: I0306 03:02:51.235633 2940 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 6 03:02:51.235647 kubelet[2940]: I0306 03:02:51.235646 2940 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 6 03:02:51.238931 kubelet[2940]: I0306 03:02:51.238896 2940 policy_none.go:44] "Start" Mar 6 03:02:51.243585 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 6 03:02:51.255897 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 6 03:02:51.259837 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 6 03:02:51.268486 kubelet[2940]: E0306 03:02:51.268455 2940 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 03:02:51.269301 kubelet[2940]: I0306 03:02:51.269275 2940 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 6 03:02:51.269395 kubelet[2940]: I0306 03:02:51.269292 2940 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 03:02:51.272056 kubelet[2940]: I0306 03:02:51.270795 2940 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 6 03:02:51.272815 kubelet[2940]: E0306 03:02:51.272794 2940 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 03:02:51.272885 kubelet[2940]: E0306 03:02:51.272838 2940 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-81\" not found" Mar 6 03:02:51.329618 systemd[1]: Created slice kubepods-burstable-pod204be96162597c6828be8a34248da0fc.slice - libcontainer container kubepods-burstable-pod204be96162597c6828be8a34248da0fc.slice. Mar 6 03:02:51.338743 kubelet[2940]: E0306 03:02:51.338662 2940 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-81\" not found" node="ip-172-31-18-81" Mar 6 03:02:51.344103 systemd[1]: Created slice kubepods-burstable-pod047077122758db862d163cbe8cfa8ab2.slice - libcontainer container kubepods-burstable-pod047077122758db862d163cbe8cfa8ab2.slice. 
Mar 6 03:02:51.353012 kubelet[2940]: E0306 03:02:51.352981 2940 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-81\" not found" node="ip-172-31-18-81" Mar 6 03:02:51.356253 systemd[1]: Created slice kubepods-burstable-podfef4088f6319da546f4e9dc73a7c2cee.slice - libcontainer container kubepods-burstable-podfef4088f6319da546f4e9dc73a7c2cee.slice. Mar 6 03:02:51.358535 kubelet[2940]: E0306 03:02:51.358497 2940 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-81\" not found" node="ip-172-31-18-81" Mar 6 03:02:51.371328 kubelet[2940]: I0306 03:02:51.371232 2940 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-81" Mar 6 03:02:51.372144 kubelet[2940]: E0306 03:02:51.372107 2940 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.18.81:6443/api/v1/nodes\": dial tcp 172.31.18.81:6443: connect: connection refused" node="ip-172-31-18-81" Mar 6 03:02:51.403123 kubelet[2940]: E0306 03:02:51.403073 2940 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-81?timeout=10s\": dial tcp 172.31.18.81:6443: connect: connection refused" interval="400ms" Mar 6 03:02:51.500710 kubelet[2940]: I0306 03:02:51.500650 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/204be96162597c6828be8a34248da0fc-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-81\" (UID: \"204be96162597c6828be8a34248da0fc\") " pod="kube-system/kube-apiserver-ip-172-31-18-81" Mar 6 03:02:51.500710 kubelet[2940]: I0306 03:02:51.500693 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/204be96162597c6828be8a34248da0fc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-81\" (UID: \"204be96162597c6828be8a34248da0fc\") " pod="kube-system/kube-apiserver-ip-172-31-18-81" Mar 6 03:02:51.501121 kubelet[2940]: I0306 03:02:51.500734 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81" Mar 6 03:02:51.501121 kubelet[2940]: I0306 03:02:51.500782 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81" Mar 6 03:02:51.501121 kubelet[2940]: I0306 03:02:51.500808 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81" Mar 6 03:02:51.501121 kubelet[2940]: I0306 03:02:51.500831 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81" Mar 6 03:02:51.501121 kubelet[2940]: I0306 03:02:51.500860 2940 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81" Mar 6 03:02:51.501256 kubelet[2940]: I0306 03:02:51.500890 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fef4088f6319da546f4e9dc73a7c2cee-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-81\" (UID: \"fef4088f6319da546f4e9dc73a7c2cee\") " pod="kube-system/kube-scheduler-ip-172-31-18-81" Mar 6 03:02:51.501256 kubelet[2940]: I0306 03:02:51.500917 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/204be96162597c6828be8a34248da0fc-ca-certs\") pod \"kube-apiserver-ip-172-31-18-81\" (UID: \"204be96162597c6828be8a34248da0fc\") " pod="kube-system/kube-apiserver-ip-172-31-18-81" Mar 6 03:02:51.574443 kubelet[2940]: I0306 03:02:51.574384 2940 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-81" Mar 6 03:02:51.574982 kubelet[2940]: E0306 03:02:51.574949 2940 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.18.81:6443/api/v1/nodes\": dial tcp 172.31.18.81:6443: connect: connection refused" node="ip-172-31-18-81" Mar 6 03:02:51.644920 containerd[1983]: time="2026-03-06T03:02:51.644805763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-81,Uid:204be96162597c6828be8a34248da0fc,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:51.657942 containerd[1983]: time="2026-03-06T03:02:51.657901331Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-81,Uid:047077122758db862d163cbe8cfa8ab2,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:51.662898 containerd[1983]: time="2026-03-06T03:02:51.662855709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-81,Uid:fef4088f6319da546f4e9dc73a7c2cee,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:51.804585 kubelet[2940]: E0306 03:02:51.804537 2940 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-81?timeout=10s\": dial tcp 172.31.18.81:6443: connect: connection refused" interval="800ms" Mar 6 03:02:51.977347 kubelet[2940]: I0306 03:02:51.976783 2940 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-81" Mar 6 03:02:51.977347 kubelet[2940]: E0306 03:02:51.977103 2940 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.18.81:6443/api/v1/nodes\": dial tcp 172.31.18.81:6443: connect: connection refused" node="ip-172-31-18-81" Mar 6 03:02:52.124005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926723119.mount: Deactivated successfully. 
Mar 6 03:02:52.139849 containerd[1983]: time="2026-03-06T03:02:52.139797433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:52.148075 containerd[1983]: time="2026-03-06T03:02:52.147875961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 6 03:02:52.150065 containerd[1983]: time="2026-03-06T03:02:52.150017051Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:52.152277 containerd[1983]: time="2026-03-06T03:02:52.152229958Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:52.155994 containerd[1983]: time="2026-03-06T03:02:52.155927898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 6 03:02:52.158319 containerd[1983]: time="2026-03-06T03:02:52.158277131Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:52.160345 containerd[1983]: time="2026-03-06T03:02:52.160309459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 6 03:02:52.162438 containerd[1983]: time="2026-03-06T03:02:52.162377354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:02:52.164438 
containerd[1983]: time="2026-03-06T03:02:52.163162751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 511.438982ms" Mar 6 03:02:52.166523 containerd[1983]: time="2026-03-06T03:02:52.166487772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 500.845251ms" Mar 6 03:02:52.171079 containerd[1983]: time="2026-03-06T03:02:52.171041241Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 510.75523ms" Mar 6 03:02:52.207955 containerd[1983]: time="2026-03-06T03:02:52.207873892Z" level=info msg="connecting to shim c2170353a8295fc017aec4fd4c6943c08af3fbbc411b991cc7cebe536a3e15ef" address="unix:///run/containerd/s/0e1061d83894cdc12194de0e300e72955396b30bbaf523b7aacc2eec8cad6884" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:52.241615 systemd[1]: Started cri-containerd-c2170353a8295fc017aec4fd4c6943c08af3fbbc411b991cc7cebe536a3e15ef.scope - libcontainer container c2170353a8295fc017aec4fd4c6943c08af3fbbc411b991cc7cebe536a3e15ef. 
Mar 6 03:02:52.255616 containerd[1983]: time="2026-03-06T03:02:52.255568948Z" level=info msg="connecting to shim 1856b59249b0dae0b8b8359bf43e58333fc32f6bc14f387667e8df73ec79ce8c" address="unix:///run/containerd/s/f0b462813fa9eef36ef15cb36c03036ec312228e0881bbb1621a285f5944bcbd" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:52.333475 containerd[1983]: time="2026-03-06T03:02:52.333069014Z" level=info msg="connecting to shim acf4ee4e6dd77f5e82c06266f1725108c2312d0518818730a77d5684e39bcb60" address="unix:///run/containerd/s/088658aed94733e929944d2438d5fe34d62a1bddce2a4b21097988f236ab3f6f" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:52.404646 systemd[1]: Started cri-containerd-1856b59249b0dae0b8b8359bf43e58333fc32f6bc14f387667e8df73ec79ce8c.scope - libcontainer container 1856b59249b0dae0b8b8359bf43e58333fc32f6bc14f387667e8df73ec79ce8c. Mar 6 03:02:52.410288 systemd[1]: Started cri-containerd-acf4ee4e6dd77f5e82c06266f1725108c2312d0518818730a77d5684e39bcb60.scope - libcontainer container acf4ee4e6dd77f5e82c06266f1725108c2312d0518818730a77d5684e39bcb60. 
Mar 6 03:02:52.423538 containerd[1983]: time="2026-03-06T03:02:52.423475727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-81,Uid:204be96162597c6828be8a34248da0fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2170353a8295fc017aec4fd4c6943c08af3fbbc411b991cc7cebe536a3e15ef\"" Mar 6 03:02:52.437309 containerd[1983]: time="2026-03-06T03:02:52.437102858Z" level=info msg="CreateContainer within sandbox \"c2170353a8295fc017aec4fd4c6943c08af3fbbc411b991cc7cebe536a3e15ef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 03:02:52.484139 containerd[1983]: time="2026-03-06T03:02:52.483905733Z" level=info msg="Container 73f407017cc909396b02310aee40b79555b981d7d87f0055f6de21ab743e1b82: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:52.512360 containerd[1983]: time="2026-03-06T03:02:52.512043261Z" level=info msg="CreateContainer within sandbox \"c2170353a8295fc017aec4fd4c6943c08af3fbbc411b991cc7cebe536a3e15ef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"73f407017cc909396b02310aee40b79555b981d7d87f0055f6de21ab743e1b82\"" Mar 6 03:02:52.513530 containerd[1983]: time="2026-03-06T03:02:52.513500853Z" level=info msg="StartContainer for \"73f407017cc909396b02310aee40b79555b981d7d87f0055f6de21ab743e1b82\"" Mar 6 03:02:52.514037 containerd[1983]: time="2026-03-06T03:02:52.513974995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-81,Uid:047077122758db862d163cbe8cfa8ab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"acf4ee4e6dd77f5e82c06266f1725108c2312d0518818730a77d5684e39bcb60\"" Mar 6 03:02:52.518579 containerd[1983]: time="2026-03-06T03:02:52.518543112Z" level=info msg="connecting to shim 73f407017cc909396b02310aee40b79555b981d7d87f0055f6de21ab743e1b82" address="unix:///run/containerd/s/0e1061d83894cdc12194de0e300e72955396b30bbaf523b7aacc2eec8cad6884" protocol=ttrpc version=3 Mar 6 03:02:52.521185 
containerd[1983]: time="2026-03-06T03:02:52.521154345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-81,Uid:fef4088f6319da546f4e9dc73a7c2cee,Namespace:kube-system,Attempt:0,} returns sandbox id \"1856b59249b0dae0b8b8359bf43e58333fc32f6bc14f387667e8df73ec79ce8c\"" Mar 6 03:02:52.527993 containerd[1983]: time="2026-03-06T03:02:52.527682856Z" level=info msg="CreateContainer within sandbox \"acf4ee4e6dd77f5e82c06266f1725108c2312d0518818730a77d5684e39bcb60\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 03:02:52.533383 containerd[1983]: time="2026-03-06T03:02:52.533346822Z" level=info msg="CreateContainer within sandbox \"1856b59249b0dae0b8b8359bf43e58333fc32f6bc14f387667e8df73ec79ce8c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 03:02:52.546820 systemd[1]: Started cri-containerd-73f407017cc909396b02310aee40b79555b981d7d87f0055f6de21ab743e1b82.scope - libcontainer container 73f407017cc909396b02310aee40b79555b981d7d87f0055f6de21ab743e1b82. 
Mar 6 03:02:52.555921 containerd[1983]: time="2026-03-06T03:02:52.555870261Z" level=info msg="Container 286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:52.558696 containerd[1983]: time="2026-03-06T03:02:52.558520310Z" level=info msg="Container df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:52.577500 containerd[1983]: time="2026-03-06T03:02:52.576037632Z" level=info msg="CreateContainer within sandbox \"acf4ee4e6dd77f5e82c06266f1725108c2312d0518818730a77d5684e39bcb60\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360\"" Mar 6 03:02:52.578116 containerd[1983]: time="2026-03-06T03:02:52.578080919Z" level=info msg="StartContainer for \"286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360\"" Mar 6 03:02:52.581504 containerd[1983]: time="2026-03-06T03:02:52.581468453Z" level=info msg="CreateContainer within sandbox \"1856b59249b0dae0b8b8359bf43e58333fc32f6bc14f387667e8df73ec79ce8c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de\"" Mar 6 03:02:52.581755 containerd[1983]: time="2026-03-06T03:02:52.581683576Z" level=info msg="connecting to shim 286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360" address="unix:///run/containerd/s/088658aed94733e929944d2438d5fe34d62a1bddce2a4b21097988f236ab3f6f" protocol=ttrpc version=3 Mar 6 03:02:52.584440 containerd[1983]: time="2026-03-06T03:02:52.583350025Z" level=info msg="StartContainer for \"df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de\"" Mar 6 03:02:52.586446 containerd[1983]: time="2026-03-06T03:02:52.585138634Z" level=info msg="connecting to shim df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de" 
address="unix:///run/containerd/s/f0b462813fa9eef36ef15cb36c03036ec312228e0881bbb1621a285f5944bcbd" protocol=ttrpc version=3 Mar 6 03:02:52.606846 kubelet[2940]: E0306 03:02:52.606802 2940 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-81?timeout=10s\": dial tcp 172.31.18.81:6443: connect: connection refused" interval="1.6s" Mar 6 03:02:52.611741 systemd[1]: Started cri-containerd-df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de.scope - libcontainer container df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de. Mar 6 03:02:52.623623 systemd[1]: Started cri-containerd-286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360.scope - libcontainer container 286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360. Mar 6 03:02:52.663003 containerd[1983]: time="2026-03-06T03:02:52.662908204Z" level=info msg="StartContainer for \"73f407017cc909396b02310aee40b79555b981d7d87f0055f6de21ab743e1b82\" returns successfully" Mar 6 03:02:52.755386 containerd[1983]: time="2026-03-06T03:02:52.754261693Z" level=info msg="StartContainer for \"df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de\" returns successfully" Mar 6 03:02:52.768197 containerd[1983]: time="2026-03-06T03:02:52.767474097Z" level=info msg="StartContainer for \"286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360\" returns successfully" Mar 6 03:02:52.780116 kubelet[2940]: I0306 03:02:52.780090 2940 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-81" Mar 6 03:02:53.249450 kubelet[2940]: E0306 03:02:53.248766 2940 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-81\" not found" node="ip-172-31-18-81" Mar 6 03:02:53.252321 kubelet[2940]: E0306 03:02:53.251978 2940 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"ip-172-31-18-81\" not found" node="ip-172-31-18-81" Mar 6 03:02:53.255660 kubelet[2940]: E0306 03:02:53.255640 2940 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-81\" not found" node="ip-172-31-18-81" Mar 6 03:02:54.260080 kubelet[2940]: E0306 03:02:54.260042 2940 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-81\" not found" node="ip-172-31-18-81" Mar 6 03:02:54.261454 kubelet[2940]: E0306 03:02:54.260662 2940 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-81\" not found" node="ip-172-31-18-81" Mar 6 03:02:54.307386 kubelet[2940]: E0306 03:02:54.307352 2940 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-81\" not found" node="ip-172-31-18-81" Mar 6 03:02:54.453979 kubelet[2940]: I0306 03:02:54.453942 2940 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-18-81" Mar 6 03:02:54.453979 kubelet[2940]: E0306 03:02:54.453979 2940 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-81\": node \"ip-172-31-18-81\" not found" Mar 6 03:02:54.496810 kubelet[2940]: I0306 03:02:54.496767 2940 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-81" Mar 6 03:02:54.506436 kubelet[2940]: E0306 03:02:54.506361 2940 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-81\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-81" Mar 6 03:02:54.507604 kubelet[2940]: I0306 03:02:54.506411 2940 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-81" Mar 6 
03:02:54.511844 kubelet[2940]: E0306 03:02:54.511233 2940 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-81\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-81" Mar 6 03:02:54.511844 kubelet[2940]: I0306 03:02:54.511263 2940 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-81" Mar 6 03:02:54.514078 kubelet[2940]: E0306 03:02:54.514038 2940 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-81\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-81" Mar 6 03:02:55.173039 kubelet[2940]: I0306 03:02:55.173003 2940 apiserver.go:52] "Watching apiserver" Mar 6 03:02:55.200051 kubelet[2940]: I0306 03:02:55.199985 2940 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 03:02:55.262450 kubelet[2940]: I0306 03:02:55.261494 2940 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-81" Mar 6 03:02:55.640842 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 6 03:02:56.712206 systemd[1]: Reload requested from client PID 3225 ('systemctl') (unit session-7.scope)... Mar 6 03:02:56.712224 systemd[1]: Reloading... Mar 6 03:02:56.845445 zram_generator::config[3269]: No configuration found. Mar 6 03:02:57.107586 systemd[1]: Reloading finished in 394 ms. Mar 6 03:02:57.146361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:57.161744 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 03:02:57.162038 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:02:57.162120 systemd[1]: kubelet.service: Consumed 877ms CPU time, 121.8M memory peak. 
Mar 6 03:02:57.164918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 03:02:57.471138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 03:02:57.483098 (kubelet)[3329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 03:02:57.551041 kubelet[3329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 03:02:57.561991 kubelet[3329]: I0306 03:02:57.561918 3329 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 6 03:02:57.561991 kubelet[3329]: I0306 03:02:57.561979 3329 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 03:02:57.561991 kubelet[3329]: I0306 03:02:57.562003 3329 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 6 03:02:57.562221 kubelet[3329]: I0306 03:02:57.562010 3329 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 03:02:57.562552 kubelet[3329]: I0306 03:02:57.562457 3329 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 6 03:02:57.564053 kubelet[3329]: I0306 03:02:57.564023 3329 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 6 03:02:57.568543 kubelet[3329]: I0306 03:02:57.567808 3329 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 03:02:57.578714 kubelet[3329]: I0306 03:02:57.578589 3329 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 6 03:02:57.582234 kubelet[3329]: I0306 03:02:57.582159 3329 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 6 03:02:57.582478 kubelet[3329]: I0306 03:02:57.582413 3329 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 03:02:57.583458 kubelet[3329]: I0306 03:02:57.582475 3329 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-81","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 03:02:57.583458 kubelet[3329]: I0306 03:02:57.582891 3329 topology_manager.go:143] "Creating topology manager with none policy"
Mar 6 03:02:57.583458 kubelet[3329]: I0306 03:02:57.582901 3329 container_manager_linux.go:308] "Creating device plugin manager"
Mar 6 03:02:57.583458 kubelet[3329]: I0306 03:02:57.582927 3329 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 6 03:02:57.583458 kubelet[3329]: I0306 03:02:57.583116 3329 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 6 03:02:57.583780 kubelet[3329]: I0306 03:02:57.583264 3329 kubelet.go:482] "Attempting to sync node with API server"
Mar 6 03:02:57.583780 kubelet[3329]: I0306 03:02:57.583278 3329 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 03:02:57.583780 kubelet[3329]: I0306 03:02:57.583295 3329 kubelet.go:394] "Adding apiserver pod source"
Mar 6 03:02:57.583780 kubelet[3329]: I0306 03:02:57.583304 3329 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 03:02:57.585624 kubelet[3329]: I0306 03:02:57.585170 3329 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 6 03:02:57.586429 kubelet[3329]: I0306 03:02:57.586395 3329 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 03:02:57.586588 kubelet[3329]: I0306 03:02:57.586575 3329 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 6 03:02:57.589784 kubelet[3329]: I0306 03:02:57.589759 3329 server.go:1257] "Started kubelet"
Mar 6 03:02:57.599464 kubelet[3329]: I0306 03:02:57.598944 3329 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 6 03:02:57.613094 kubelet[3329]: I0306 03:02:57.612586 3329 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 03:02:57.614449 kubelet[3329]: I0306 03:02:57.613941 3329 server.go:317] "Adding debug handlers to kubelet server"
Mar 6 03:02:57.619449 kubelet[3329]: I0306 03:02:57.617930 3329 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 03:02:57.619449 kubelet[3329]: I0306 03:02:57.618014 3329 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 6 03:02:57.619449 kubelet[3329]: I0306 03:02:57.618184 3329 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 03:02:57.619449 kubelet[3329]: I0306 03:02:57.618752 3329 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 03:02:57.620975 kubelet[3329]: I0306 03:02:57.620777 3329 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 6 03:02:57.621078 kubelet[3329]: E0306 03:02:57.621056 3329 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-18-81\" not found"
Mar 6 03:02:57.624451 kubelet[3329]: I0306 03:02:57.623444 3329 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 6 03:02:57.624451 kubelet[3329]: I0306 03:02:57.623576 3329 reconciler.go:29] "Reconciler: start to sync state"
Mar 6 03:02:57.628360 kubelet[3329]: I0306 03:02:57.626539 3329 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 6 03:02:57.628360 kubelet[3329]: I0306 03:02:57.628270 3329 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 6 03:02:57.641237 kubelet[3329]: I0306 03:02:57.641193 3329 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 6 03:02:57.641237 kubelet[3329]: I0306 03:02:57.641244 3329 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 6 03:02:57.641410 kubelet[3329]: E0306 03:02:57.641307 3329 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 03:02:57.644083 kubelet[3329]: I0306 03:02:57.642853 3329 factory.go:223] Registration of the systemd container factory successfully
Mar 6 03:02:57.644083 kubelet[3329]: I0306 03:02:57.643047 3329 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 03:02:57.656144 kubelet[3329]: I0306 03:02:57.656114 3329 factory.go:223] Registration of the containerd container factory successfully
Mar 6 03:02:57.657879 kubelet[3329]: E0306 03:02:57.657845 3329 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722022 3329 cpu_manager.go:225] "Starting" policy="none"
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722041 3329 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722063 3329 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722217 3329 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722231 3329 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722252 3329 policy_none.go:50] "Start"
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722265 3329 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722277 3329 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722410 3329 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 6 03:02:57.723465 kubelet[3329]: I0306 03:02:57.722461 3329 policy_none.go:44] "Start"
Mar 6 03:02:57.729672 kubelet[3329]: E0306 03:02:57.728147 3329 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 6 03:02:57.729672 kubelet[3329]: I0306 03:02:57.728618 3329 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 6 03:02:57.729672 kubelet[3329]: I0306 03:02:57.728631 3329 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 6 03:02:57.729672 kubelet[3329]: I0306 03:02:57.728964 3329 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 6 03:02:57.731451 kubelet[3329]: E0306 03:02:57.731399 3329 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 6 03:02:57.741989 kubelet[3329]: I0306 03:02:57.741946 3329 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-81"
Mar 6 03:02:57.744213 kubelet[3329]: I0306 03:02:57.742371 3329 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-81"
Mar 6 03:02:57.747254 kubelet[3329]: I0306 03:02:57.745277 3329 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-81"
Mar 6 03:02:57.761115 kubelet[3329]: E0306 03:02:57.761080 3329 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-81\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-81"
Mar 6 03:02:57.843379 kubelet[3329]: I0306 03:02:57.843322 3329 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-81"
Mar 6 03:02:57.859948 kubelet[3329]: I0306 03:02:57.859877 3329 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-18-81"
Mar 6 03:02:57.860160 kubelet[3329]: I0306 03:02:57.860138 3329 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-18-81"
Mar 6 03:02:57.925257 kubelet[3329]: I0306 03:02:57.924628 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81"
Mar 6 03:02:57.925257 kubelet[3329]: I0306 03:02:57.924677 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fef4088f6319da546f4e9dc73a7c2cee-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-81\" (UID: \"fef4088f6319da546f4e9dc73a7c2cee\") " pod="kube-system/kube-scheduler-ip-172-31-18-81"
Mar 6 03:02:57.925257 kubelet[3329]: I0306 03:02:57.924704 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/204be96162597c6828be8a34248da0fc-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-81\" (UID: \"204be96162597c6828be8a34248da0fc\") " pod="kube-system/kube-apiserver-ip-172-31-18-81"
Mar 6 03:02:57.925257 kubelet[3329]: I0306 03:02:57.924731 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/204be96162597c6828be8a34248da0fc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-81\" (UID: \"204be96162597c6828be8a34248da0fc\") " pod="kube-system/kube-apiserver-ip-172-31-18-81"
Mar 6 03:02:57.925257 kubelet[3329]: I0306 03:02:57.924758 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81"
Mar 6 03:02:57.925535 kubelet[3329]: I0306 03:02:57.924781 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81"
Mar 6 03:02:57.925535 kubelet[3329]: I0306 03:02:57.924821 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81"
Mar 6 03:02:57.925535 kubelet[3329]: I0306 03:02:57.924844 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/204be96162597c6828be8a34248da0fc-ca-certs\") pod \"kube-apiserver-ip-172-31-18-81\" (UID: \"204be96162597c6828be8a34248da0fc\") " pod="kube-system/kube-apiserver-ip-172-31-18-81"
Mar 6 03:02:57.925535 kubelet[3329]: I0306 03:02:57.924868 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/047077122758db862d163cbe8cfa8ab2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-81\" (UID: \"047077122758db862d163cbe8cfa8ab2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-81"
Mar 6 03:02:58.585580 kubelet[3329]: I0306 03:02:58.585397 3329 apiserver.go:52] "Watching apiserver"
Mar 6 03:02:58.624795 kubelet[3329]: I0306 03:02:58.623910 3329 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 6 03:02:58.645660 kubelet[3329]: I0306 03:02:58.645589 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-81" podStartSLOduration=1.6455550410000002 podStartE2EDuration="1.645555041s" podCreationTimestamp="2026-03-06 03:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:58.645395138 +0000 UTC m=+1.157366229" watchObservedRunningTime="2026-03-06 03:02:58.645555041 +0000 UTC m=+1.157526122"
Mar 6 03:02:58.679436 kubelet[3329]: I0306 03:02:58.679276 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-81" podStartSLOduration=3.679256956 podStartE2EDuration="3.679256956s" podCreationTimestamp="2026-03-06 03:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:58.658029298 +0000 UTC m=+1.170000385" watchObservedRunningTime="2026-03-06 03:02:58.679256956 +0000 UTC m=+1.191228042"
Mar 6 03:02:58.703447 kubelet[3329]: I0306 03:02:58.703091 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-81" podStartSLOduration=1.703075327 podStartE2EDuration="1.703075327s" podCreationTimestamp="2026-03-06 03:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:58.681824872 +0000 UTC m=+1.193795960" watchObservedRunningTime="2026-03-06 03:02:58.703075327 +0000 UTC m=+1.215046414"
Mar 6 03:03:02.796328 kubelet[3329]: I0306 03:03:02.796287 3329 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 6 03:03:02.797714 containerd[1983]: time="2026-03-06T03:03:02.797661593Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 6 03:03:02.798200 kubelet[3329]: I0306 03:03:02.798014 3329 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 6 03:03:03.813471 systemd[1]: Created slice kubepods-besteffort-pod9cc9889c_435d_4f54_96ac_d475be8423a9.slice - libcontainer container kubepods-besteffort-pod9cc9889c_435d_4f54_96ac_d475be8423a9.slice.
Mar 6 03:03:03.869662 kubelet[3329]: I0306 03:03:03.869554 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cc9889c-435d-4f54-96ac-d475be8423a9-xtables-lock\") pod \"kube-proxy-qn6lb\" (UID: \"9cc9889c-435d-4f54-96ac-d475be8423a9\") " pod="kube-system/kube-proxy-qn6lb"
Mar 6 03:03:03.870262 kubelet[3329]: I0306 03:03:03.869634 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cc9889c-435d-4f54-96ac-d475be8423a9-lib-modules\") pod \"kube-proxy-qn6lb\" (UID: \"9cc9889c-435d-4f54-96ac-d475be8423a9\") " pod="kube-system/kube-proxy-qn6lb"
Mar 6 03:03:03.870501 kubelet[3329]: I0306 03:03:03.869801 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6crww\" (UniqueName: \"kubernetes.io/projected/9cc9889c-435d-4f54-96ac-d475be8423a9-kube-api-access-6crww\") pod \"kube-proxy-qn6lb\" (UID: \"9cc9889c-435d-4f54-96ac-d475be8423a9\") " pod="kube-system/kube-proxy-qn6lb"
Mar 6 03:03:03.870501 kubelet[3329]: I0306 03:03:03.870465 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9cc9889c-435d-4f54-96ac-d475be8423a9-kube-proxy\") pod \"kube-proxy-qn6lb\" (UID: \"9cc9889c-435d-4f54-96ac-d475be8423a9\") " pod="kube-system/kube-proxy-qn6lb"
Mar 6 03:03:04.056235 systemd[1]: Created slice kubepods-besteffort-podc54662b0_debd_40bb_b56f_b5102250da55.slice - libcontainer container kubepods-besteffort-podc54662b0_debd_40bb_b56f_b5102250da55.slice.
Mar 6 03:03:04.072951 kubelet[3329]: I0306 03:03:04.072795 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpjms\" (UniqueName: \"kubernetes.io/projected/c54662b0-debd-40bb-b56f-b5102250da55-kube-api-access-rpjms\") pod \"tigera-operator-6cf4cccc57-pmvkw\" (UID: \"c54662b0-debd-40bb-b56f-b5102250da55\") " pod="tigera-operator/tigera-operator-6cf4cccc57-pmvkw"
Mar 6 03:03:04.072951 kubelet[3329]: I0306 03:03:04.072845 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c54662b0-debd-40bb-b56f-b5102250da55-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-pmvkw\" (UID: \"c54662b0-debd-40bb-b56f-b5102250da55\") " pod="tigera-operator/tigera-operator-6cf4cccc57-pmvkw"
Mar 6 03:03:04.126310 containerd[1983]: time="2026-03-06T03:03:04.126272455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn6lb,Uid:9cc9889c-435d-4f54-96ac-d475be8423a9,Namespace:kube-system,Attempt:0,}"
Mar 6 03:03:04.154291 containerd[1983]: time="2026-03-06T03:03:04.153677096Z" level=info msg="connecting to shim c344a0019754800cfdb68d60aa302e559e9f1ced2fef371e360d52ed6c5c890c" address="unix:///run/containerd/s/1158283fd958eca763654090b68d9244aee6676f2c7692264a22756355df4de5" namespace=k8s.io protocol=ttrpc version=3
Mar 6 03:03:04.187730 systemd[1]: Started cri-containerd-c344a0019754800cfdb68d60aa302e559e9f1ced2fef371e360d52ed6c5c890c.scope - libcontainer container c344a0019754800cfdb68d60aa302e559e9f1ced2fef371e360d52ed6c5c890c.
Mar 6 03:03:04.223843 containerd[1983]: time="2026-03-06T03:03:04.223800740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn6lb,Uid:9cc9889c-435d-4f54-96ac-d475be8423a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c344a0019754800cfdb68d60aa302e559e9f1ced2fef371e360d52ed6c5c890c\""
Mar 6 03:03:04.235278 containerd[1983]: time="2026-03-06T03:03:04.235211507Z" level=info msg="CreateContainer within sandbox \"c344a0019754800cfdb68d60aa302e559e9f1ced2fef371e360d52ed6c5c890c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 6 03:03:04.272716 containerd[1983]: time="2026-03-06T03:03:04.272172909Z" level=info msg="Container 2480b0f0cea74516600140d239707797b2c743200d2608a565229e2d6cf0d973: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:03:04.281469 containerd[1983]: time="2026-03-06T03:03:04.281393827Z" level=info msg="CreateContainer within sandbox \"c344a0019754800cfdb68d60aa302e559e9f1ced2fef371e360d52ed6c5c890c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2480b0f0cea74516600140d239707797b2c743200d2608a565229e2d6cf0d973\""
Mar 6 03:03:04.282781 containerd[1983]: time="2026-03-06T03:03:04.282354688Z" level=info msg="StartContainer for \"2480b0f0cea74516600140d239707797b2c743200d2608a565229e2d6cf0d973\""
Mar 6 03:03:04.284695 containerd[1983]: time="2026-03-06T03:03:04.284658604Z" level=info msg="connecting to shim 2480b0f0cea74516600140d239707797b2c743200d2608a565229e2d6cf0d973" address="unix:///run/containerd/s/1158283fd958eca763654090b68d9244aee6676f2c7692264a22756355df4de5" protocol=ttrpc version=3
Mar 6 03:03:04.308028 systemd[1]: Started cri-containerd-2480b0f0cea74516600140d239707797b2c743200d2608a565229e2d6cf0d973.scope - libcontainer container 2480b0f0cea74516600140d239707797b2c743200d2608a565229e2d6cf0d973.
Mar 6 03:03:04.366700 containerd[1983]: time="2026-03-06T03:03:04.365116675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-pmvkw,Uid:c54662b0-debd-40bb-b56f-b5102250da55,Namespace:tigera-operator,Attempt:0,}"
Mar 6 03:03:04.403776 containerd[1983]: time="2026-03-06T03:03:04.403724032Z" level=info msg="connecting to shim 618adb6d4de6f3d1adab051101c6c7a6db375409469f08f4598ce9505f295988" address="unix:///run/containerd/s/821fb70241586150d055bc6ba4b70bb375d58be72108dab4da57be435ab1e9d8" namespace=k8s.io protocol=ttrpc version=3
Mar 6 03:03:04.408065 containerd[1983]: time="2026-03-06T03:03:04.408016423Z" level=info msg="StartContainer for \"2480b0f0cea74516600140d239707797b2c743200d2608a565229e2d6cf0d973\" returns successfully"
Mar 6 03:03:04.447700 systemd[1]: Started cri-containerd-618adb6d4de6f3d1adab051101c6c7a6db375409469f08f4598ce9505f295988.scope - libcontainer container 618adb6d4de6f3d1adab051101c6c7a6db375409469f08f4598ce9505f295988.
Mar 6 03:03:04.540503 containerd[1983]: time="2026-03-06T03:03:04.539444457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-pmvkw,Uid:c54662b0-debd-40bb-b56f-b5102250da55,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"618adb6d4de6f3d1adab051101c6c7a6db375409469f08f4598ce9505f295988\""
Mar 6 03:03:04.543632 containerd[1983]: time="2026-03-06T03:03:04.543593742Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 6 03:03:04.766543 kubelet[3329]: I0306 03:03:04.766390 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-qn6lb" podStartSLOduration=1.766339566 podStartE2EDuration="1.766339566s" podCreationTimestamp="2026-03-06 03:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:03:04.765899281 +0000 UTC m=+7.277870368" watchObservedRunningTime="2026-03-06 03:03:04.766339566 +0000 UTC m=+7.278310649"
Mar 6 03:03:06.105291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242038423.mount: Deactivated successfully.
Mar 6 03:03:09.477696 update_engine[1953]: I20260306 03:03:09.477624 1953 update_attempter.cc:509] Updating boot flags...
Mar 6 03:03:11.285085 containerd[1983]: time="2026-03-06T03:03:11.285028089Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:11.286234 containerd[1983]: time="2026-03-06T03:03:11.286096851Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 6 03:03:11.287346 containerd[1983]: time="2026-03-06T03:03:11.287305188Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:11.290534 containerd[1983]: time="2026-03-06T03:03:11.289302519Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:11.290534 containerd[1983]: time="2026-03-06T03:03:11.289972907Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 6.746328165s"
Mar 6 03:03:11.290534 containerd[1983]: time="2026-03-06T03:03:11.290005097Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 6 03:03:11.303459 containerd[1983]: time="2026-03-06T03:03:11.303402321Z" level=info msg="CreateContainer within sandbox \"618adb6d4de6f3d1adab051101c6c7a6db375409469f08f4598ce9505f295988\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 6 03:03:11.311940 containerd[1983]: time="2026-03-06T03:03:11.311878231Z" level=info msg="Container 4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:03:11.324466 containerd[1983]: time="2026-03-06T03:03:11.323993523Z" level=info msg="CreateContainer within sandbox \"618adb6d4de6f3d1adab051101c6c7a6db375409469f08f4598ce9505f295988\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87\""
Mar 6 03:03:11.325082 containerd[1983]: time="2026-03-06T03:03:11.325029050Z" level=info msg="StartContainer for \"4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87\""
Mar 6 03:03:11.327623 containerd[1983]: time="2026-03-06T03:03:11.327584116Z" level=info msg="connecting to shim 4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87" address="unix:///run/containerd/s/821fb70241586150d055bc6ba4b70bb375d58be72108dab4da57be435ab1e9d8" protocol=ttrpc version=3
Mar 6 03:03:11.354823 systemd[1]: Started cri-containerd-4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87.scope - libcontainer container 4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87.
Mar 6 03:03:11.389473 containerd[1983]: time="2026-03-06T03:03:11.389434579Z" level=info msg="StartContainer for \"4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87\" returns successfully"
Mar 6 03:03:11.788598 kubelet[3329]: I0306 03:03:11.788520 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-pmvkw" podStartSLOduration=2.038481933 podStartE2EDuration="8.78678765s" podCreationTimestamp="2026-03-06 03:03:03 +0000 UTC" firstStartedPulling="2026-03-06 03:03:04.542717518 +0000 UTC m=+7.054688586" lastFinishedPulling="2026-03-06 03:03:11.291023238 +0000 UTC m=+13.802994303" observedRunningTime="2026-03-06 03:03:11.786136676 +0000 UTC m=+14.298107764" watchObservedRunningTime="2026-03-06 03:03:11.78678765 +0000 UTC m=+14.298758736"
Mar 6 03:03:18.414731 sudo[2350]: pam_unix(sudo:session): session closed for user root
Mar 6 03:03:18.493441 sshd[2349]: Connection closed by 68.220.241.50 port 57410
Mar 6 03:03:18.494614 sshd-session[2346]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:18.502575 systemd-logind[1952]: Session 7 logged out. Waiting for processes to exit.
Mar 6 03:03:18.502987 systemd[1]: sshd@6-172.31.18.81:22-68.220.241.50:57410.service: Deactivated successfully.
Mar 6 03:03:18.507770 systemd[1]: session-7.scope: Deactivated successfully.
Mar 6 03:03:18.508114 systemd[1]: session-7.scope: Consumed 4.208s CPU time, 166.5M memory peak.
Mar 6 03:03:18.514158 systemd-logind[1952]: Removed session 7.
Mar 6 03:03:22.110121 systemd[1]: Created slice kubepods-besteffort-pod12e5637a_2d6c_45d3_a45e_df181fa53524.slice - libcontainer container kubepods-besteffort-pod12e5637a_2d6c_45d3_a45e_df181fa53524.slice.
Mar 6 03:03:22.197061 kubelet[3329]: I0306 03:03:22.197017 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/12e5637a-2d6c-45d3-a45e-df181fa53524-typha-certs\") pod \"calico-typha-7c8fd85878-clcd6\" (UID: \"12e5637a-2d6c-45d3-a45e-df181fa53524\") " pod="calico-system/calico-typha-7c8fd85878-clcd6"
Mar 6 03:03:22.198634 kubelet[3329]: I0306 03:03:22.197070 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12e5637a-2d6c-45d3-a45e-df181fa53524-tigera-ca-bundle\") pod \"calico-typha-7c8fd85878-clcd6\" (UID: \"12e5637a-2d6c-45d3-a45e-df181fa53524\") " pod="calico-system/calico-typha-7c8fd85878-clcd6"
Mar 6 03:03:22.198634 kubelet[3329]: I0306 03:03:22.197097 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2wl\" (UniqueName: \"kubernetes.io/projected/12e5637a-2d6c-45d3-a45e-df181fa53524-kube-api-access-4r2wl\") pod \"calico-typha-7c8fd85878-clcd6\" (UID: \"12e5637a-2d6c-45d3-a45e-df181fa53524\") " pod="calico-system/calico-typha-7c8fd85878-clcd6"
Mar 6 03:03:22.217013 systemd[1]: Created slice kubepods-besteffort-pod75e20893_922a_4999_9ce7_db6e98ff63c4.slice - libcontainer container kubepods-besteffort-pod75e20893_922a_4999_9ce7_db6e98ff63c4.slice.
Mar 6 03:03:22.333107 kubelet[3329]: E0306 03:03:22.333042 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:22.398732 kubelet[3329]: I0306 03:03:22.398112 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-flexvol-driver-host\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.399552 kubelet[3329]: I0306 03:03:22.399375 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/75e20893-922a-4999-9ce7-db6e98ff63c4-node-certs\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.399876 kubelet[3329]: I0306 03:03:22.399834 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-policysync\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.400164 kubelet[3329]: I0306 03:03:22.400021 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-sys-fs\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.400164 kubelet[3329]: I0306 03:03:22.400134 3329 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-var-run-calico\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.400406 kubelet[3329]: I0306 03:03:22.400382 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-cni-bin-dir\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.401073 kubelet[3329]: I0306 03:03:22.400463 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-xtables-lock\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.401073 kubelet[3329]: I0306 03:03:22.400490 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwp2h\" (UniqueName: \"kubernetes.io/projected/75e20893-922a-4999-9ce7-db6e98ff63c4-kube-api-access-lwp2h\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.401073 kubelet[3329]: I0306 03:03:22.400508 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-nodeproc\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.401073 kubelet[3329]: I0306 03:03:22.400523 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75e20893-922a-4999-9ce7-db6e98ff63c4-tigera-ca-bundle\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.401073 kubelet[3329]: I0306 03:03:22.400539 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-lib-modules\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.401267 kubelet[3329]: I0306 03:03:22.400561 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-var-lib-calico\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.401267 kubelet[3329]: I0306 03:03:22.400588 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-cni-net-dir\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.401267 kubelet[3329]: I0306 03:03:22.400609 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-bpffs\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.401267 kubelet[3329]: I0306 03:03:22.400630 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/75e20893-922a-4999-9ce7-db6e98ff63c4-cni-log-dir\") pod \"calico-node-l6jzk\" (UID: \"75e20893-922a-4999-9ce7-db6e98ff63c4\") " pod="calico-system/calico-node-l6jzk" Mar 6 03:03:22.423604 containerd[1983]: time="2026-03-06T03:03:22.423556261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c8fd85878-clcd6,Uid:12e5637a-2d6c-45d3-a45e-df181fa53524,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:22.460621 containerd[1983]: time="2026-03-06T03:03:22.460573020Z" level=info msg="connecting to shim d78f9ed5a3c2030d86a5bb5519d342f932116f70d4acec75f99eb171fe31bbc9" address="unix:///run/containerd/s/f3b9ab3c471ae44b2653384338b5a40d8ea883459ec268f6377b4114d5e978de" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:22.494726 systemd[1]: Started cri-containerd-d78f9ed5a3c2030d86a5bb5519d342f932116f70d4acec75f99eb171fe31bbc9.scope - libcontainer container d78f9ed5a3c2030d86a5bb5519d342f932116f70d4acec75f99eb171fe31bbc9. Mar 6 03:03:22.501137 kubelet[3329]: I0306 03:03:22.501091 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa230f7b-c2dc-46a5-ab7f-c83880b50346-kubelet-dir\") pod \"csi-node-driver-zsntj\" (UID: \"aa230f7b-c2dc-46a5-ab7f-c83880b50346\") " pod="calico-system/csi-node-driver-zsntj" Mar 6 03:03:22.501295 kubelet[3329]: I0306 03:03:22.501183 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czkh9\" (UniqueName: \"kubernetes.io/projected/aa230f7b-c2dc-46a5-ab7f-c83880b50346-kube-api-access-czkh9\") pod \"csi-node-driver-zsntj\" (UID: \"aa230f7b-c2dc-46a5-ab7f-c83880b50346\") " pod="calico-system/csi-node-driver-zsntj" Mar 6 03:03:22.501295 kubelet[3329]: I0306 03:03:22.501228 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/aa230f7b-c2dc-46a5-ab7f-c83880b50346-socket-dir\") pod \"csi-node-driver-zsntj\" (UID: \"aa230f7b-c2dc-46a5-ab7f-c83880b50346\") " pod="calico-system/csi-node-driver-zsntj" Mar 6 03:03:22.501295 kubelet[3329]: I0306 03:03:22.501274 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aa230f7b-c2dc-46a5-ab7f-c83880b50346-varrun\") pod \"csi-node-driver-zsntj\" (UID: \"aa230f7b-c2dc-46a5-ab7f-c83880b50346\") " pod="calico-system/csi-node-driver-zsntj" Mar 6 03:03:22.501525 kubelet[3329]: I0306 03:03:22.501333 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aa230f7b-c2dc-46a5-ab7f-c83880b50346-registration-dir\") pod \"csi-node-driver-zsntj\" (UID: \"aa230f7b-c2dc-46a5-ab7f-c83880b50346\") " pod="calico-system/csi-node-driver-zsntj" Mar 6 03:03:22.510932 kubelet[3329]: E0306 03:03:22.510905 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.511279 kubelet[3329]: W0306 03:03:22.511256 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.512521 kubelet[3329]: E0306 03:03:22.512463 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.524135 kubelet[3329]: E0306 03:03:22.524107 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.524512 kubelet[3329]: W0306 03:03:22.524472 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.524512 kubelet[3329]: E0306 03:03:22.524504 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.568215 containerd[1983]: time="2026-03-06T03:03:22.568164434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c8fd85878-clcd6,Uid:12e5637a-2d6c-45d3-a45e-df181fa53524,Namespace:calico-system,Attempt:0,} returns sandbox id \"d78f9ed5a3c2030d86a5bb5519d342f932116f70d4acec75f99eb171fe31bbc9\"" Mar 6 03:03:22.570890 containerd[1983]: time="2026-03-06T03:03:22.570850499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 03:03:22.602552 kubelet[3329]: E0306 03:03:22.602511 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.602552 kubelet[3329]: W0306 03:03:22.602534 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.602552 kubelet[3329]: E0306 03:03:22.602559 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.602889 kubelet[3329]: E0306 03:03:22.602864 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.602889 kubelet[3329]: W0306 03:03:22.602881 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.603092 kubelet[3329]: E0306 03:03:22.602898 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.603314 kubelet[3329]: E0306 03:03:22.603292 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.603314 kubelet[3329]: W0306 03:03:22.603309 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.603494 kubelet[3329]: E0306 03:03:22.603325 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.603646 kubelet[3329]: E0306 03:03:22.603626 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.603646 kubelet[3329]: W0306 03:03:22.603641 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.603751 kubelet[3329]: E0306 03:03:22.603656 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.603905 kubelet[3329]: E0306 03:03:22.603888 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.603905 kubelet[3329]: W0306 03:03:22.603902 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.604004 kubelet[3329]: E0306 03:03:22.603915 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.604199 kubelet[3329]: E0306 03:03:22.604182 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.604199 kubelet[3329]: W0306 03:03:22.604195 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.604303 kubelet[3329]: E0306 03:03:22.604208 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.604503 kubelet[3329]: E0306 03:03:22.604484 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.604503 kubelet[3329]: W0306 03:03:22.604499 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.604611 kubelet[3329]: E0306 03:03:22.604515 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.604775 kubelet[3329]: E0306 03:03:22.604759 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.604775 kubelet[3329]: W0306 03:03:22.604772 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.604950 kubelet[3329]: E0306 03:03:22.604786 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.605008 kubelet[3329]: E0306 03:03:22.604997 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.605053 kubelet[3329]: W0306 03:03:22.605028 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.605053 kubelet[3329]: E0306 03:03:22.605040 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.605273 kubelet[3329]: E0306 03:03:22.605257 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.605273 kubelet[3329]: W0306 03:03:22.605270 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.605374 kubelet[3329]: E0306 03:03:22.605283 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.605602 kubelet[3329]: E0306 03:03:22.605585 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.605602 kubelet[3329]: W0306 03:03:22.605599 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.605700 kubelet[3329]: E0306 03:03:22.605611 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.605887 kubelet[3329]: E0306 03:03:22.605868 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.605887 kubelet[3329]: W0306 03:03:22.605881 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.605987 kubelet[3329]: E0306 03:03:22.605896 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.606284 kubelet[3329]: E0306 03:03:22.606249 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.606351 kubelet[3329]: W0306 03:03:22.606304 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.606351 kubelet[3329]: E0306 03:03:22.606323 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.607772 kubelet[3329]: E0306 03:03:22.607698 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.607772 kubelet[3329]: W0306 03:03:22.607713 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.607772 kubelet[3329]: E0306 03:03:22.607728 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.608295 kubelet[3329]: E0306 03:03:22.608283 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.608445 kubelet[3329]: W0306 03:03:22.608384 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.608671 kubelet[3329]: E0306 03:03:22.608506 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.608918 kubelet[3329]: E0306 03:03:22.608904 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.609065 kubelet[3329]: W0306 03:03:22.608993 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.609065 kubelet[3329]: E0306 03:03:22.609012 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.609380 kubelet[3329]: E0306 03:03:22.609340 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.609380 kubelet[3329]: W0306 03:03:22.609353 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.609380 kubelet[3329]: E0306 03:03:22.609367 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.609835 kubelet[3329]: E0306 03:03:22.609795 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.609835 kubelet[3329]: W0306 03:03:22.609807 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.609835 kubelet[3329]: E0306 03:03:22.609821 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.610336 kubelet[3329]: E0306 03:03:22.610293 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.610336 kubelet[3329]: W0306 03:03:22.610306 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.610336 kubelet[3329]: E0306 03:03:22.610320 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.610822 kubelet[3329]: E0306 03:03:22.610810 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.610960 kubelet[3329]: W0306 03:03:22.610884 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.610960 kubelet[3329]: E0306 03:03:22.610903 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.611300 kubelet[3329]: E0306 03:03:22.611261 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.611300 kubelet[3329]: W0306 03:03:22.611273 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.611300 kubelet[3329]: E0306 03:03:22.611287 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.611686 kubelet[3329]: E0306 03:03:22.611648 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.611686 kubelet[3329]: W0306 03:03:22.611660 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.611686 kubelet[3329]: E0306 03:03:22.611672 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.612212 kubelet[3329]: E0306 03:03:22.612064 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.612212 kubelet[3329]: W0306 03:03:22.612076 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.612212 kubelet[3329]: E0306 03:03:22.612089 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.612734 kubelet[3329]: E0306 03:03:22.612589 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.612734 kubelet[3329]: W0306 03:03:22.612601 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.612734 kubelet[3329]: E0306 03:03:22.612613 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.613071 kubelet[3329]: E0306 03:03:22.613018 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.613071 kubelet[3329]: W0306 03:03:22.613030 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.613071 kubelet[3329]: E0306 03:03:22.613044 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:22.618769 kubelet[3329]: E0306 03:03:22.618741 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:22.618769 kubelet[3329]: W0306 03:03:22.618761 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:22.618769 kubelet[3329]: E0306 03:03:22.618783 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:22.827296 containerd[1983]: time="2026-03-06T03:03:22.827250879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6jzk,Uid:75e20893-922a-4999-9ce7-db6e98ff63c4,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:22.851933 containerd[1983]: time="2026-03-06T03:03:22.851887638Z" level=info msg="connecting to shim 6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4" address="unix:///run/containerd/s/38861d89db923481511732801b7909f146ed581666c067154d8ffa0b261601df" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:22.881705 systemd[1]: Started cri-containerd-6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4.scope - libcontainer container 6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4. 
Mar 6 03:03:22.925604 containerd[1983]: time="2026-03-06T03:03:22.925564383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6jzk,Uid:75e20893-922a-4999-9ce7-db6e98ff63c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4\"" Mar 6 03:03:23.642917 kubelet[3329]: E0306 03:03:23.642806 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:23.890491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469109852.mount: Deactivated successfully. Mar 6 03:03:24.744901 containerd[1983]: time="2026-03-06T03:03:24.744854412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:24.746852 containerd[1983]: time="2026-03-06T03:03:24.746704889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 6 03:03:24.749011 containerd[1983]: time="2026-03-06T03:03:24.748960476Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:24.752485 containerd[1983]: time="2026-03-06T03:03:24.752446355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:24.753200 containerd[1983]: time="2026-03-06T03:03:24.753167973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.182271642s" Mar 6 03:03:24.753331 containerd[1983]: time="2026-03-06T03:03:24.753312650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 03:03:24.755055 containerd[1983]: time="2026-03-06T03:03:24.754862368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 03:03:24.777692 containerd[1983]: time="2026-03-06T03:03:24.777630934Z" level=info msg="CreateContainer within sandbox \"d78f9ed5a3c2030d86a5bb5519d342f932116f70d4acec75f99eb171fe31bbc9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 03:03:24.793601 containerd[1983]: time="2026-03-06T03:03:24.793561694Z" level=info msg="Container 2ee6af2d93c22e58d6ecdb22edaf38dbcd17d6892c3557002b09618fdd9e0783: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:24.802292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1973781068.mount: Deactivated successfully. 
Mar 6 03:03:24.813583 containerd[1983]: time="2026-03-06T03:03:24.813540089Z" level=info msg="CreateContainer within sandbox \"d78f9ed5a3c2030d86a5bb5519d342f932116f70d4acec75f99eb171fe31bbc9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ee6af2d93c22e58d6ecdb22edaf38dbcd17d6892c3557002b09618fdd9e0783\"" Mar 6 03:03:24.814543 containerd[1983]: time="2026-03-06T03:03:24.814367821Z" level=info msg="StartContainer for \"2ee6af2d93c22e58d6ecdb22edaf38dbcd17d6892c3557002b09618fdd9e0783\"" Mar 6 03:03:24.816136 containerd[1983]: time="2026-03-06T03:03:24.816081505Z" level=info msg="connecting to shim 2ee6af2d93c22e58d6ecdb22edaf38dbcd17d6892c3557002b09618fdd9e0783" address="unix:///run/containerd/s/f3b9ab3c471ae44b2653384338b5a40d8ea883459ec268f6377b4114d5e978de" protocol=ttrpc version=3 Mar 6 03:03:24.842757 systemd[1]: Started cri-containerd-2ee6af2d93c22e58d6ecdb22edaf38dbcd17d6892c3557002b09618fdd9e0783.scope - libcontainer container 2ee6af2d93c22e58d6ecdb22edaf38dbcd17d6892c3557002b09618fdd9e0783. 
Mar 6 03:03:24.928387 containerd[1983]: time="2026-03-06T03:03:24.928282551Z" level=info msg="StartContainer for \"2ee6af2d93c22e58d6ecdb22edaf38dbcd17d6892c3557002b09618fdd9e0783\" returns successfully" Mar 6 03:03:25.643883 kubelet[3329]: E0306 03:03:25.643244 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:25.902848 kubelet[3329]: I0306 03:03:25.902342 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-7c8fd85878-clcd6" podStartSLOduration=1.7184189380000001 podStartE2EDuration="3.902327337s" podCreationTimestamp="2026-03-06 03:03:22 +0000 UTC" firstStartedPulling="2026-03-06 03:03:22.5703613 +0000 UTC m=+25.082332364" lastFinishedPulling="2026-03-06 03:03:24.754269681 +0000 UTC m=+27.266240763" observedRunningTime="2026-03-06 03:03:25.901985805 +0000 UTC m=+28.413956934" watchObservedRunningTime="2026-03-06 03:03:25.902327337 +0000 UTC m=+28.414298474" Mar 6 03:03:25.922813 kubelet[3329]: E0306 03:03:25.922768 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.922813 kubelet[3329]: W0306 03:03:25.922794 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.923123 kubelet[3329]: E0306 03:03:25.922822 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.923123 kubelet[3329]: E0306 03:03:25.923065 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.923123 kubelet[3329]: W0306 03:03:25.923076 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.923123 kubelet[3329]: E0306 03:03:25.923091 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.923472 kubelet[3329]: E0306 03:03:25.923276 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.923472 kubelet[3329]: W0306 03:03:25.923286 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.923472 kubelet[3329]: E0306 03:03:25.923298 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.923702 kubelet[3329]: E0306 03:03:25.923558 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.923702 kubelet[3329]: W0306 03:03:25.923569 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.923702 kubelet[3329]: E0306 03:03:25.923583 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.923900 kubelet[3329]: E0306 03:03:25.923790 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.923900 kubelet[3329]: W0306 03:03:25.923799 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.923900 kubelet[3329]: E0306 03:03:25.923812 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.924134 kubelet[3329]: E0306 03:03:25.924016 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.924134 kubelet[3329]: W0306 03:03:25.924025 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.924134 kubelet[3329]: E0306 03:03:25.924037 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.924383 kubelet[3329]: E0306 03:03:25.924213 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.924383 kubelet[3329]: W0306 03:03:25.924222 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.924383 kubelet[3329]: E0306 03:03:25.924234 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.924631 kubelet[3329]: E0306 03:03:25.924412 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.924631 kubelet[3329]: W0306 03:03:25.924446 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.924631 kubelet[3329]: E0306 03:03:25.924459 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.924818 kubelet[3329]: E0306 03:03:25.924674 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.924818 kubelet[3329]: W0306 03:03:25.924683 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.924818 kubelet[3329]: E0306 03:03:25.924694 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.925007 kubelet[3329]: E0306 03:03:25.924863 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.925007 kubelet[3329]: W0306 03:03:25.924871 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.925007 kubelet[3329]: E0306 03:03:25.924882 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.925168 kubelet[3329]: E0306 03:03:25.925052 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.925168 kubelet[3329]: W0306 03:03:25.925060 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.925168 kubelet[3329]: E0306 03:03:25.925071 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.925449 kubelet[3329]: E0306 03:03:25.925406 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.925520 kubelet[3329]: W0306 03:03:25.925451 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.925520 kubelet[3329]: E0306 03:03:25.925468 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.925683 kubelet[3329]: E0306 03:03:25.925666 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.925683 kubelet[3329]: W0306 03:03:25.925679 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.925788 kubelet[3329]: E0306 03:03:25.925691 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.925892 kubelet[3329]: E0306 03:03:25.925876 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.925892 kubelet[3329]: W0306 03:03:25.925888 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.925986 kubelet[3329]: E0306 03:03:25.925901 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.926125 kubelet[3329]: E0306 03:03:25.926106 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.926125 kubelet[3329]: W0306 03:03:25.926121 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.926232 kubelet[3329]: E0306 03:03:25.926134 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.939219 kubelet[3329]: E0306 03:03:25.939183 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.939219 kubelet[3329]: W0306 03:03:25.939211 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.939443 kubelet[3329]: E0306 03:03:25.939243 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.939610 kubelet[3329]: E0306 03:03:25.939588 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.939692 kubelet[3329]: W0306 03:03:25.939609 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.939692 kubelet[3329]: E0306 03:03:25.939624 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.940001 kubelet[3329]: E0306 03:03:25.939971 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.940001 kubelet[3329]: W0306 03:03:25.939986 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.940001 kubelet[3329]: E0306 03:03:25.940000 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.940282 kubelet[3329]: E0306 03:03:25.940265 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.940282 kubelet[3329]: W0306 03:03:25.940279 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.940385 kubelet[3329]: E0306 03:03:25.940292 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.940917 kubelet[3329]: E0306 03:03:25.940895 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.940917 kubelet[3329]: W0306 03:03:25.940914 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.941030 kubelet[3329]: E0306 03:03:25.940929 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.941888 kubelet[3329]: E0306 03:03:25.941732 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.941888 kubelet[3329]: W0306 03:03:25.941748 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.941888 kubelet[3329]: E0306 03:03:25.941762 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.942190 kubelet[3329]: E0306 03:03:25.942171 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.942304 kubelet[3329]: W0306 03:03:25.942186 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.942363 kubelet[3329]: E0306 03:03:25.942307 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.942679 kubelet[3329]: E0306 03:03:25.942618 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.942679 kubelet[3329]: W0306 03:03:25.942628 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.942679 kubelet[3329]: E0306 03:03:25.942642 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.942994 kubelet[3329]: E0306 03:03:25.942979 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.943135 kubelet[3329]: W0306 03:03:25.943079 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.943135 kubelet[3329]: E0306 03:03:25.943118 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.944110 kubelet[3329]: E0306 03:03:25.943996 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.944110 kubelet[3329]: W0306 03:03:25.944011 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.944110 kubelet[3329]: E0306 03:03:25.944025 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.944803 kubelet[3329]: E0306 03:03:25.944753 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.944803 kubelet[3329]: W0306 03:03:25.944767 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.944803 kubelet[3329]: E0306 03:03:25.944780 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.945619 kubelet[3329]: E0306 03:03:25.945573 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.945619 kubelet[3329]: W0306 03:03:25.945588 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.945619 kubelet[3329]: E0306 03:03:25.945602 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.946113 kubelet[3329]: E0306 03:03:25.946070 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.946113 kubelet[3329]: W0306 03:03:25.946085 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.946113 kubelet[3329]: E0306 03:03:25.946098 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.946740 kubelet[3329]: E0306 03:03:25.946619 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.946740 kubelet[3329]: W0306 03:03:25.946635 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.946740 kubelet[3329]: E0306 03:03:25.946648 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.947659 kubelet[3329]: E0306 03:03:25.947507 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.947659 kubelet[3329]: W0306 03:03:25.947522 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.947659 kubelet[3329]: E0306 03:03:25.947535 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.948308 kubelet[3329]: E0306 03:03:25.948264 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.948308 kubelet[3329]: W0306 03:03:25.948278 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.948308 kubelet[3329]: E0306 03:03:25.948292 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:25.948922 kubelet[3329]: E0306 03:03:25.948874 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.948922 kubelet[3329]: W0306 03:03:25.948889 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.948922 kubelet[3329]: E0306 03:03:25.948903 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:25.949408 kubelet[3329]: E0306 03:03:25.949356 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:25.949408 kubelet[3329]: W0306 03:03:25.949369 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:25.949408 kubelet[3329]: E0306 03:03:25.949382 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:26.891580 kubelet[3329]: I0306 03:03:26.891537 3329 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 6 03:03:26.934002 kubelet[3329]: E0306 03:03:26.933965 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.934002 kubelet[3329]: W0306 03:03:26.933990 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.934239 kubelet[3329]: E0306 03:03:26.934016 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:26.934239 kubelet[3329]: E0306 03:03:26.934214 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.934239 kubelet[3329]: W0306 03:03:26.934224 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.934239 kubelet[3329]: E0306 03:03:26.934236 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:26.934609 kubelet[3329]: E0306 03:03:26.934564 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.934609 kubelet[3329]: W0306 03:03:26.934576 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.934609 kubelet[3329]: E0306 03:03:26.934590 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:26.934890 kubelet[3329]: E0306 03:03:26.934869 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.934890 kubelet[3329]: W0306 03:03:26.934884 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.935017 kubelet[3329]: E0306 03:03:26.934900 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:26.935123 kubelet[3329]: E0306 03:03:26.935104 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.935123 kubelet[3329]: W0306 03:03:26.935118 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.935240 kubelet[3329]: E0306 03:03:26.935133 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:26.935338 kubelet[3329]: E0306 03:03:26.935319 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.935338 kubelet[3329]: W0306 03:03:26.935331 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.935490 kubelet[3329]: E0306 03:03:26.935343 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:26.935574 kubelet[3329]: E0306 03:03:26.935556 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.935574 kubelet[3329]: W0306 03:03:26.935570 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.935666 kubelet[3329]: E0306 03:03:26.935584 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:26.935796 kubelet[3329]: E0306 03:03:26.935780 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.935796 kubelet[3329]: W0306 03:03:26.935793 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.935894 kubelet[3329]: E0306 03:03:26.935805 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:26.936006 kubelet[3329]: E0306 03:03:26.935993 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.936068 kubelet[3329]: W0306 03:03:26.936006 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.936068 kubelet[3329]: E0306 03:03:26.936018 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:26.936231 kubelet[3329]: E0306 03:03:26.936215 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.936231 kubelet[3329]: W0306 03:03:26.936228 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.936336 kubelet[3329]: E0306 03:03:26.936240 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:03:26.953086 kubelet[3329]: E0306 03:03:26.953069 3329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:03:26.953086 kubelet[3329]: W0306 03:03:26.953082 3329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:03:26.953165 kubelet[3329]: E0306 03:03:26.953095 3329 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:03:27.644082 kubelet[3329]: E0306 03:03:27.643712 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:28.301295 containerd[1983]: time="2026-03-06T03:03:28.301241779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:28.303328 containerd[1983]: time="2026-03-06T03:03:28.303154693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 6 03:03:28.305510 containerd[1983]: time="2026-03-06T03:03:28.305470793Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:28.309448 containerd[1983]: time="2026-03-06T03:03:28.309032194Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:28.309849 containerd[1983]: time="2026-03-06T03:03:28.309813392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 3.554910505s" Mar 6 03:03:28.309967 containerd[1983]: time="2026-03-06T03:03:28.309948239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 03:03:28.318232 containerd[1983]: time="2026-03-06T03:03:28.318184254Z" level=info msg="CreateContainer within sandbox \"6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 03:03:28.335660 containerd[1983]: time="2026-03-06T03:03:28.335614586Z" level=info msg="Container dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:28.351908 containerd[1983]: time="2026-03-06T03:03:28.351865976Z" level=info msg="CreateContainer within sandbox \"6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161\"" Mar 6 03:03:28.352876 containerd[1983]: time="2026-03-06T03:03:28.352844587Z" level=info msg="StartContainer for \"dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161\"" Mar 6 03:03:28.355172 containerd[1983]: time="2026-03-06T03:03:28.355133364Z" 
level=info msg="connecting to shim dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161" address="unix:///run/containerd/s/38861d89db923481511732801b7909f146ed581666c067154d8ffa0b261601df" protocol=ttrpc version=3 Mar 6 03:03:28.388663 systemd[1]: Started cri-containerd-dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161.scope - libcontainer container dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161. Mar 6 03:03:28.474057 containerd[1983]: time="2026-03-06T03:03:28.473001427Z" level=info msg="StartContainer for \"dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161\" returns successfully" Mar 6 03:03:28.478109 systemd[1]: cri-containerd-dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161.scope: Deactivated successfully. Mar 6 03:03:28.521282 containerd[1983]: time="2026-03-06T03:03:28.521054779Z" level=info msg="received container exit event container_id:\"dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161\" id:\"dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161\" pid:4168 exited_at:{seconds:1772766208 nanos:483849988}" Mar 6 03:03:28.556726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dafcb982b74bdd3d454f947fa532dc8f370c54c61d920698c92f349745853161-rootfs.mount: Deactivated successfully. 
Mar 6 03:03:28.888089 containerd[1983]: time="2026-03-06T03:03:28.887890706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 03:03:29.642856 kubelet[3329]: E0306 03:03:29.642761 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:31.645001 kubelet[3329]: E0306 03:03:31.642644 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:33.642491 kubelet[3329]: E0306 03:03:33.641993 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:35.645293 kubelet[3329]: E0306 03:03:35.643309 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:37.642843 kubelet[3329]: E0306 03:03:37.642581 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:39.642261 kubelet[3329]: E0306 03:03:39.642144 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:41.071466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499771010.mount: Deactivated successfully. Mar 6 03:03:41.132102 containerd[1983]: time="2026-03-06T03:03:41.132058338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:41.134902 containerd[1983]: time="2026-03-06T03:03:41.134584271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 03:03:41.135884 containerd[1983]: time="2026-03-06T03:03:41.135151643Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:41.139008 containerd[1983]: time="2026-03-06T03:03:41.138956470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:41.139728 containerd[1983]: time="2026-03-06T03:03:41.139689671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.251724832s" Mar 6 
03:03:41.139860 containerd[1983]: time="2026-03-06T03:03:41.139733866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 03:03:41.172748 containerd[1983]: time="2026-03-06T03:03:41.146540184Z" level=info msg="CreateContainer within sandbox \"6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 03:03:41.172748 containerd[1983]: time="2026-03-06T03:03:41.159616900Z" level=info msg="Container bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:41.165996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3179270058.mount: Deactivated successfully. Mar 6 03:03:41.195359 containerd[1983]: time="2026-03-06T03:03:41.195306035Z" level=info msg="CreateContainer within sandbox \"6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f\"" Mar 6 03:03:41.196383 containerd[1983]: time="2026-03-06T03:03:41.195974298Z" level=info msg="StartContainer for \"bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f\"" Mar 6 03:03:41.203026 containerd[1983]: time="2026-03-06T03:03:41.202976756Z" level=info msg="connecting to shim bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f" address="unix:///run/containerd/s/38861d89db923481511732801b7909f146ed581666c067154d8ffa0b261601df" protocol=ttrpc version=3 Mar 6 03:03:41.297677 systemd[1]: Started cri-containerd-bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f.scope - libcontainer container bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f. 
Mar 6 03:03:41.395769 containerd[1983]: time="2026-03-06T03:03:41.395073395Z" level=info msg="StartContainer for \"bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f\" returns successfully" Mar 6 03:03:41.643238 kubelet[3329]: E0306 03:03:41.642711 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:42.522399 systemd[1]: cri-containerd-bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f.scope: Deactivated successfully. Mar 6 03:03:42.523487 systemd[1]: cri-containerd-bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f.scope: Consumed 92ms CPU time, 35.1M memory peak, 12.5M read from disk. Mar 6 03:03:42.566613 containerd[1983]: time="2026-03-06T03:03:42.566537418Z" level=info msg="received container exit event container_id:\"bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f\" id:\"bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f\" pid:4227 exited_at:{seconds:1772766222 nanos:565906844}" Mar 6 03:03:42.597230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bde05bffb510c9f545d46ae34198e5a9837d5c089c568427e4e6b2daad787f0f-rootfs.mount: Deactivated successfully. 
Mar 6 03:03:42.937013 containerd[1983]: time="2026-03-06T03:03:42.936898271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 03:03:43.642242 kubelet[3329]: E0306 03:03:43.642123 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:45.645624 kubelet[3329]: E0306 03:03:45.643478 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:45.881544 containerd[1983]: time="2026-03-06T03:03:45.881491074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:45.882807 containerd[1983]: time="2026-03-06T03:03:45.882695346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 03:03:45.883868 containerd[1983]: time="2026-03-06T03:03:45.883582451Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:45.885880 containerd[1983]: time="2026-03-06T03:03:45.885845127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:45.886898 containerd[1983]: time="2026-03-06T03:03:45.886867273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with 
image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.949925836s" Mar 6 03:03:45.887035 containerd[1983]: time="2026-03-06T03:03:45.887012127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 03:03:45.892908 containerd[1983]: time="2026-03-06T03:03:45.892855853Z" level=info msg="CreateContainer within sandbox \"6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 03:03:45.905440 containerd[1983]: time="2026-03-06T03:03:45.903596330Z" level=info msg="Container f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:45.929314 containerd[1983]: time="2026-03-06T03:03:45.929263785Z" level=info msg="CreateContainer within sandbox \"6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a\"" Mar 6 03:03:45.941557 containerd[1983]: time="2026-03-06T03:03:45.941510396Z" level=info msg="StartContainer for \"f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a\"" Mar 6 03:03:45.943804 containerd[1983]: time="2026-03-06T03:03:45.943655189Z" level=info msg="connecting to shim f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a" address="unix:///run/containerd/s/38861d89db923481511732801b7909f146ed581666c067154d8ffa0b261601df" protocol=ttrpc version=3 Mar 6 03:03:46.002660 systemd[1]: Started cri-containerd-f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a.scope - libcontainer container 
f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a. Mar 6 03:03:46.087625 containerd[1983]: time="2026-03-06T03:03:46.087586183Z" level=info msg="StartContainer for \"f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a\" returns successfully" Mar 6 03:03:47.056557 systemd[1]: cri-containerd-f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a.scope: Deactivated successfully. Mar 6 03:03:47.057435 systemd[1]: cri-containerd-f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a.scope: Consumed 656ms CPU time, 171.8M memory peak, 6.5M read from disk, 177M written to disk. Mar 6 03:03:47.064156 containerd[1983]: time="2026-03-06T03:03:47.062024815Z" level=info msg="received container exit event container_id:\"f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a\" id:\"f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a\" pid:4284 exited_at:{seconds:1772766227 nanos:61548419}" Mar 6 03:03:47.109377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f13023d27b14c1936525576323f087103a742bf9d67d758d7eedb6226c821b0a-rootfs.mount: Deactivated successfully. Mar 6 03:03:47.151588 kubelet[3329]: I0306 03:03:47.150506 3329 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 6 03:03:47.346023 systemd[1]: Created slice kubepods-burstable-pode854168f_88e6_4370_8105_c1eae5b8cb91.slice - libcontainer container kubepods-burstable-pode854168f_88e6_4370_8105_c1eae5b8cb91.slice. Mar 6 03:03:47.357590 systemd[1]: Created slice kubepods-besteffort-pod289ec3ea_b2e0_4ded_ad19_fc4c25fc97e9.slice - libcontainer container kubepods-besteffort-pod289ec3ea_b2e0_4ded_ad19_fc4c25fc97e9.slice. Mar 6 03:03:47.373566 systemd[1]: Created slice kubepods-besteffort-pod5de748ec_8a9e_43a7_9a5d_22442eda9add.slice - libcontainer container kubepods-besteffort-pod5de748ec_8a9e_43a7_9a5d_22442eda9add.slice. 
Mar 6 03:03:47.387091 systemd[1]: Created slice kubepods-besteffort-pod754aef33_4132_4e75_8c11_8ccce15a7459.slice - libcontainer container kubepods-besteffort-pod754aef33_4132_4e75_8c11_8ccce15a7459.slice. Mar 6 03:03:47.391821 kubelet[3329]: I0306 03:03:47.391133 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqp5l\" (UniqueName: \"kubernetes.io/projected/289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9-kube-api-access-bqp5l\") pod \"calico-kube-controllers-6d48547fb8-6cbb5\" (UID: \"289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9\") " pod="calico-system/calico-kube-controllers-6d48547fb8-6cbb5" Mar 6 03:03:47.391821 kubelet[3329]: I0306 03:03:47.391181 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9-tigera-ca-bundle\") pod \"calico-kube-controllers-6d48547fb8-6cbb5\" (UID: \"289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9\") " pod="calico-system/calico-kube-controllers-6d48547fb8-6cbb5" Mar 6 03:03:47.391821 kubelet[3329]: I0306 03:03:47.391206 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-nginx-config\") pod \"whisker-c8bfd94c4-wv9xg\" (UID: \"30d37140-2c8f-4d28-bae5-0ab16e26d339\") " pod="calico-system/whisker-c8bfd94c4-wv9xg" Mar 6 03:03:47.403614 systemd[1]: Created slice kubepods-burstable-poda66f999f_3bca_46b9_bcee_a4b3df253d8f.slice - libcontainer container kubepods-burstable-poda66f999f_3bca_46b9_bcee_a4b3df253d8f.slice. Mar 6 03:03:47.415108 systemd[1]: Created slice kubepods-besteffort-pod031c3aba_3f87_40fc_a1e7_11afbad831cd.slice - libcontainer container kubepods-besteffort-pod031c3aba_3f87_40fc_a1e7_11afbad831cd.slice. 
Mar 6 03:03:47.425303 systemd[1]: Created slice kubepods-besteffort-pod30d37140_2c8f_4d28_bae5_0ab16e26d339.slice - libcontainer container kubepods-besteffort-pod30d37140_2c8f_4d28_bae5_0ab16e26d339.slice. Mar 6 03:03:47.492454 kubelet[3329]: I0306 03:03:47.492256 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-backend-key-pair\") pod \"whisker-c8bfd94c4-wv9xg\" (UID: \"30d37140-2c8f-4d28-bae5-0ab16e26d339\") " pod="calico-system/whisker-c8bfd94c4-wv9xg" Mar 6 03:03:47.492454 kubelet[3329]: I0306 03:03:47.492306 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5de748ec-8a9e-43a7-9a5d-22442eda9add-calico-apiserver-certs\") pod \"calico-apiserver-7fbfb65546-bwshp\" (UID: \"5de748ec-8a9e-43a7-9a5d-22442eda9add\") " pod="calico-system/calico-apiserver-7fbfb65546-bwshp" Mar 6 03:03:47.492454 kubelet[3329]: I0306 03:03:47.492331 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/031c3aba-3f87-40fc-a1e7-11afbad831cd-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-zcmkp\" (UID: \"031c3aba-3f87-40fc-a1e7-11afbad831cd\") " pod="calico-system/goldmane-9f7667bb8-zcmkp" Mar 6 03:03:47.492454 kubelet[3329]: I0306 03:03:47.492383 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-ca-bundle\") pod \"whisker-c8bfd94c4-wv9xg\" (UID: \"30d37140-2c8f-4d28-bae5-0ab16e26d339\") " pod="calico-system/whisker-c8bfd94c4-wv9xg" Mar 6 03:03:47.526005 kubelet[3329]: I0306 03:03:47.492408 3329 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9gp\" (UniqueName: \"kubernetes.io/projected/30d37140-2c8f-4d28-bae5-0ab16e26d339-kube-api-access-4z9gp\") pod \"whisker-c8bfd94c4-wv9xg\" (UID: \"30d37140-2c8f-4d28-bae5-0ab16e26d339\") " pod="calico-system/whisker-c8bfd94c4-wv9xg" Mar 6 03:03:47.526005 kubelet[3329]: I0306 03:03:47.493008 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a66f999f-3bca-46b9-bcee-a4b3df253d8f-config-volume\") pod \"coredns-7d764666f9-d8hdh\" (UID: \"a66f999f-3bca-46b9-bcee-a4b3df253d8f\") " pod="kube-system/coredns-7d764666f9-d8hdh" Mar 6 03:03:47.526005 kubelet[3329]: I0306 03:03:47.493074 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w86c\" (UniqueName: \"kubernetes.io/projected/e854168f-88e6-4370-8105-c1eae5b8cb91-kube-api-access-4w86c\") pod \"coredns-7d764666f9-5nmrp\" (UID: \"e854168f-88e6-4370-8105-c1eae5b8cb91\") " pod="kube-system/coredns-7d764666f9-5nmrp" Mar 6 03:03:47.526005 kubelet[3329]: I0306 03:03:47.493318 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/754aef33-4132-4e75-8c11-8ccce15a7459-calico-apiserver-certs\") pod \"calico-apiserver-7fbfb65546-qf94t\" (UID: \"754aef33-4132-4e75-8c11-8ccce15a7459\") " pod="calico-system/calico-apiserver-7fbfb65546-qf94t" Mar 6 03:03:47.526005 kubelet[3329]: I0306 03:03:47.493339 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/031c3aba-3f87-40fc-a1e7-11afbad831cd-goldmane-key-pair\") pod \"goldmane-9f7667bb8-zcmkp\" (UID: \"031c3aba-3f87-40fc-a1e7-11afbad831cd\") " pod="calico-system/goldmane-9f7667bb8-zcmkp" Mar 6 03:03:47.526429 
kubelet[3329]: I0306 03:03:47.494559 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcqq2\" (UniqueName: \"kubernetes.io/projected/a66f999f-3bca-46b9-bcee-a4b3df253d8f-kube-api-access-dcqq2\") pod \"coredns-7d764666f9-d8hdh\" (UID: \"a66f999f-3bca-46b9-bcee-a4b3df253d8f\") " pod="kube-system/coredns-7d764666f9-d8hdh" Mar 6 03:03:47.526429 kubelet[3329]: I0306 03:03:47.495461 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e854168f-88e6-4370-8105-c1eae5b8cb91-config-volume\") pod \"coredns-7d764666f9-5nmrp\" (UID: \"e854168f-88e6-4370-8105-c1eae5b8cb91\") " pod="kube-system/coredns-7d764666f9-5nmrp" Mar 6 03:03:47.526429 kubelet[3329]: I0306 03:03:47.495489 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031c3aba-3f87-40fc-a1e7-11afbad831cd-config\") pod \"goldmane-9f7667bb8-zcmkp\" (UID: \"031c3aba-3f87-40fc-a1e7-11afbad831cd\") " pod="calico-system/goldmane-9f7667bb8-zcmkp" Mar 6 03:03:47.526429 kubelet[3329]: I0306 03:03:47.495533 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d8sk\" (UniqueName: \"kubernetes.io/projected/754aef33-4132-4e75-8c11-8ccce15a7459-kube-api-access-7d8sk\") pod \"calico-apiserver-7fbfb65546-qf94t\" (UID: \"754aef33-4132-4e75-8c11-8ccce15a7459\") " pod="calico-system/calico-apiserver-7fbfb65546-qf94t" Mar 6 03:03:47.526429 kubelet[3329]: I0306 03:03:47.495576 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn64r\" (UniqueName: \"kubernetes.io/projected/031c3aba-3f87-40fc-a1e7-11afbad831cd-kube-api-access-nn64r\") pod \"goldmane-9f7667bb8-zcmkp\" (UID: \"031c3aba-3f87-40fc-a1e7-11afbad831cd\") " 
pod="calico-system/goldmane-9f7667bb8-zcmkp" Mar 6 03:03:47.526654 kubelet[3329]: I0306 03:03:47.495606 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfm8p\" (UniqueName: \"kubernetes.io/projected/5de748ec-8a9e-43a7-9a5d-22442eda9add-kube-api-access-jfm8p\") pod \"calico-apiserver-7fbfb65546-bwshp\" (UID: \"5de748ec-8a9e-43a7-9a5d-22442eda9add\") " pod="calico-system/calico-apiserver-7fbfb65546-bwshp" Mar 6 03:03:47.671831 systemd[1]: Created slice kubepods-besteffort-podaa230f7b_c2dc_46a5_ab7f_c83880b50346.slice - libcontainer container kubepods-besteffort-podaa230f7b_c2dc_46a5_ab7f_c83880b50346.slice. Mar 6 03:03:47.673958 containerd[1983]: time="2026-03-06T03:03:47.671905215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-5nmrp,Uid:e854168f-88e6-4370-8105-c1eae5b8cb91,Namespace:kube-system,Attempt:0,}" Mar 6 03:03:47.677730 containerd[1983]: time="2026-03-06T03:03:47.677674172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d48547fb8-6cbb5,Uid:289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:47.683768 containerd[1983]: time="2026-03-06T03:03:47.683692081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbfb65546-bwshp,Uid:5de748ec-8a9e-43a7-9a5d-22442eda9add,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:47.687078 containerd[1983]: time="2026-03-06T03:03:47.687047427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zsntj,Uid:aa230f7b-c2dc-46a5-ab7f-c83880b50346,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:47.708099 containerd[1983]: time="2026-03-06T03:03:47.708057393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbfb65546-qf94t,Uid:754aef33-4132-4e75-8c11-8ccce15a7459,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:47.724371 containerd[1983]: time="2026-03-06T03:03:47.724307252Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d8hdh,Uid:a66f999f-3bca-46b9-bcee-a4b3df253d8f,Namespace:kube-system,Attempt:0,}" Mar 6 03:03:47.729019 containerd[1983]: time="2026-03-06T03:03:47.728487430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-zcmkp,Uid:031c3aba-3f87-40fc-a1e7-11afbad831cd,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:47.741397 containerd[1983]: time="2026-03-06T03:03:47.740938563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c8bfd94c4-wv9xg,Uid:30d37140-2c8f-4d28-bae5-0ab16e26d339,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:48.043821 containerd[1983]: time="2026-03-06T03:03:48.043778740Z" level=info msg="CreateContainer within sandbox \"6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 03:03:48.158442 containerd[1983]: time="2026-03-06T03:03:48.158114603Z" level=info msg="Container 2bdf0c46cf1013e0d2ad20e57fbe775c8c883b68ac6697640aa5b28609e1ab55: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:48.158291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3259174755.mount: Deactivated successfully. 
Mar 6 03:03:48.223652 containerd[1983]: time="2026-03-06T03:03:48.223468996Z" level=info msg="CreateContainer within sandbox \"6c71b844ac6edd4bd62b8b5573a078f5be63ca7914b66a730d23bb1b8d0522a4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2bdf0c46cf1013e0d2ad20e57fbe775c8c883b68ac6697640aa5b28609e1ab55\"" Mar 6 03:03:48.224407 containerd[1983]: time="2026-03-06T03:03:48.224291104Z" level=info msg="StartContainer for \"2bdf0c46cf1013e0d2ad20e57fbe775c8c883b68ac6697640aa5b28609e1ab55\"" Mar 6 03:03:48.233378 containerd[1983]: time="2026-03-06T03:03:48.233335931Z" level=info msg="connecting to shim 2bdf0c46cf1013e0d2ad20e57fbe775c8c883b68ac6697640aa5b28609e1ab55" address="unix:///run/containerd/s/38861d89db923481511732801b7909f146ed581666c067154d8ffa0b261601df" protocol=ttrpc version=3 Mar 6 03:03:48.271663 containerd[1983]: time="2026-03-06T03:03:48.270732373Z" level=error msg="Failed to destroy network for sandbox \"cff9e2ae57176f8e58ff89cbbdd0447a82157d995dbcdbe20ea13b647d5306e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.276493 systemd[1]: run-netns-cni\x2df68063a2\x2d9497\x2d55e8\x2d7956\x2deb8c04c0780e.mount: Deactivated successfully. 
Mar 6 03:03:48.279223 containerd[1983]: time="2026-03-06T03:03:48.279175545Z" level=error msg="Failed to destroy network for sandbox \"63857c41fe6d6a38d7a138d5c9579c47dd84827d635fe9f2626604e54142c297\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.281453 containerd[1983]: time="2026-03-06T03:03:48.281374705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-5nmrp,Uid:e854168f-88e6-4370-8105-c1eae5b8cb91,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff9e2ae57176f8e58ff89cbbdd0447a82157d995dbcdbe20ea13b647d5306e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.284442 containerd[1983]: time="2026-03-06T03:03:48.283762944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbfb65546-bwshp,Uid:5de748ec-8a9e-43a7-9a5d-22442eda9add,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63857c41fe6d6a38d7a138d5c9579c47dd84827d635fe9f2626604e54142c297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.303338 kubelet[3329]: E0306 03:03:48.303204 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63857c41fe6d6a38d7a138d5c9579c47dd84827d635fe9f2626604e54142c297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 
03:03:48.303338 kubelet[3329]: E0306 03:03:48.303294 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63857c41fe6d6a38d7a138d5c9579c47dd84827d635fe9f2626604e54142c297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fbfb65546-bwshp" Mar 6 03:03:48.304366 kubelet[3329]: E0306 03:03:48.304298 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff9e2ae57176f8e58ff89cbbdd0447a82157d995dbcdbe20ea13b647d5306e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.304366 kubelet[3329]: E0306 03:03:48.304358 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff9e2ae57176f8e58ff89cbbdd0447a82157d995dbcdbe20ea13b647d5306e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-5nmrp" Mar 6 03:03:48.304899 kubelet[3329]: E0306 03:03:48.304382 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff9e2ae57176f8e58ff89cbbdd0447a82157d995dbcdbe20ea13b647d5306e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-5nmrp" Mar 6 03:03:48.304899 kubelet[3329]: E0306 03:03:48.304459 3329 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-5nmrp_kube-system(e854168f-88e6-4370-8105-c1eae5b8cb91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-5nmrp_kube-system(e854168f-88e6-4370-8105-c1eae5b8cb91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cff9e2ae57176f8e58ff89cbbdd0447a82157d995dbcdbe20ea13b647d5306e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-5nmrp" podUID="e854168f-88e6-4370-8105-c1eae5b8cb91" Mar 6 03:03:48.305443 kubelet[3329]: E0306 03:03:48.303320 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63857c41fe6d6a38d7a138d5c9579c47dd84827d635fe9f2626604e54142c297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fbfb65546-bwshp" Mar 6 03:03:48.305679 kubelet[3329]: E0306 03:03:48.305504 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fbfb65546-bwshp_calico-system(5de748ec-8a9e-43a7-9a5d-22442eda9add)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fbfb65546-bwshp_calico-system(5de748ec-8a9e-43a7-9a5d-22442eda9add)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63857c41fe6d6a38d7a138d5c9579c47dd84827d635fe9f2626604e54142c297\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-7fbfb65546-bwshp" podUID="5de748ec-8a9e-43a7-9a5d-22442eda9add" Mar 6 03:03:48.345114 containerd[1983]: time="2026-03-06T03:03:48.344868168Z" level=error msg="Failed to destroy network for sandbox \"c40b5b3f7a5525384c5c24db27ffc74891fb686d94ce8aba50d8d60d29ea14eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.347885 containerd[1983]: time="2026-03-06T03:03:48.347830044Z" level=error msg="Failed to destroy network for sandbox \"8a6fa2bc5a41f6687abf76b02ec37f27cd4698f18e5b4561e676e55b36974185\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.354378 containerd[1983]: time="2026-03-06T03:03:48.354031900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbfb65546-qf94t,Uid:754aef33-4132-4e75-8c11-8ccce15a7459,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40b5b3f7a5525384c5c24db27ffc74891fb686d94ce8aba50d8d60d29ea14eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.354575 kubelet[3329]: E0306 03:03:48.354388 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40b5b3f7a5525384c5c24db27ffc74891fb686d94ce8aba50d8d60d29ea14eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.354575 kubelet[3329]: E0306 03:03:48.354469 3329 kuberuntime_sandbox.go:71] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40b5b3f7a5525384c5c24db27ffc74891fb686d94ce8aba50d8d60d29ea14eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fbfb65546-qf94t" Mar 6 03:03:48.354575 kubelet[3329]: E0306 03:03:48.354492 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40b5b3f7a5525384c5c24db27ffc74891fb686d94ce8aba50d8d60d29ea14eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fbfb65546-qf94t" Mar 6 03:03:48.354728 kubelet[3329]: E0306 03:03:48.354560 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fbfb65546-qf94t_calico-system(754aef33-4132-4e75-8c11-8ccce15a7459)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fbfb65546-qf94t_calico-system(754aef33-4132-4e75-8c11-8ccce15a7459)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c40b5b3f7a5525384c5c24db27ffc74891fb686d94ce8aba50d8d60d29ea14eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7fbfb65546-qf94t" podUID="754aef33-4132-4e75-8c11-8ccce15a7459" Mar 6 03:03:48.357218 containerd[1983]: time="2026-03-06T03:03:48.357135845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d48547fb8-6cbb5,Uid:289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9,Namespace:calico-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a6fa2bc5a41f6687abf76b02ec37f27cd4698f18e5b4561e676e55b36974185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.358510 kubelet[3329]: E0306 03:03:48.357481 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a6fa2bc5a41f6687abf76b02ec37f27cd4698f18e5b4561e676e55b36974185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.358510 kubelet[3329]: E0306 03:03:48.357554 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a6fa2bc5a41f6687abf76b02ec37f27cd4698f18e5b4561e676e55b36974185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d48547fb8-6cbb5" Mar 6 03:03:48.358510 kubelet[3329]: E0306 03:03:48.357599 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a6fa2bc5a41f6687abf76b02ec37f27cd4698f18e5b4561e676e55b36974185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d48547fb8-6cbb5" Mar 6 03:03:48.359829 kubelet[3329]: E0306 03:03:48.357685 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-6d48547fb8-6cbb5_calico-system(289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d48547fb8-6cbb5_calico-system(289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a6fa2bc5a41f6687abf76b02ec37f27cd4698f18e5b4561e676e55b36974185\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d48547fb8-6cbb5" podUID="289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9" Mar 6 03:03:48.365220 systemd[1]: Started cri-containerd-2bdf0c46cf1013e0d2ad20e57fbe775c8c883b68ac6697640aa5b28609e1ab55.scope - libcontainer container 2bdf0c46cf1013e0d2ad20e57fbe775c8c883b68ac6697640aa5b28609e1ab55. Mar 6 03:03:48.382927 containerd[1983]: time="2026-03-06T03:03:48.382878634Z" level=error msg="Failed to destroy network for sandbox \"0ee5db2a543ea185eeee5e16f3736e10ee64390829a0c57d8cf07b0715fa7544\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.384122 containerd[1983]: time="2026-03-06T03:03:48.384078532Z" level=error msg="Failed to destroy network for sandbox \"528eab00574cb273bb9fea6f9498c601b6eba66d7bd8e8769ae5e13bbd56af1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.386323 containerd[1983]: time="2026-03-06T03:03:48.386270088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zsntj,Uid:aa230f7b-c2dc-46a5-ab7f-c83880b50346,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"0ee5db2a543ea185eeee5e16f3736e10ee64390829a0c57d8cf07b0715fa7544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.388135 kubelet[3329]: E0306 03:03:48.387547 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee5db2a543ea185eeee5e16f3736e10ee64390829a0c57d8cf07b0715fa7544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.388351 containerd[1983]: time="2026-03-06T03:03:48.388310133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d8hdh,Uid:a66f999f-3bca-46b9-bcee-a4b3df253d8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"528eab00574cb273bb9fea6f9498c601b6eba66d7bd8e8769ae5e13bbd56af1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.388522 kubelet[3329]: E0306 03:03:48.388489 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee5db2a543ea185eeee5e16f3736e10ee64390829a0c57d8cf07b0715fa7544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zsntj" Mar 6 03:03:48.388609 kubelet[3329]: E0306 03:03:48.388551 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0ee5db2a543ea185eeee5e16f3736e10ee64390829a0c57d8cf07b0715fa7544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zsntj" Mar 6 03:03:48.389072 kubelet[3329]: E0306 03:03:48.388648 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zsntj_calico-system(aa230f7b-c2dc-46a5-ab7f-c83880b50346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zsntj_calico-system(aa230f7b-c2dc-46a5-ab7f-c83880b50346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ee5db2a543ea185eeee5e16f3736e10ee64390829a0c57d8cf07b0715fa7544\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zsntj" podUID="aa230f7b-c2dc-46a5-ab7f-c83880b50346" Mar 6 03:03:48.389072 kubelet[3329]: E0306 03:03:48.388810 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528eab00574cb273bb9fea6f9498c601b6eba66d7bd8e8769ae5e13bbd56af1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.389072 kubelet[3329]: E0306 03:03:48.388843 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528eab00574cb273bb9fea6f9498c601b6eba66d7bd8e8769ae5e13bbd56af1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7d764666f9-d8hdh" Mar 6 03:03:48.389255 kubelet[3329]: E0306 03:03:48.388864 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528eab00574cb273bb9fea6f9498c601b6eba66d7bd8e8769ae5e13bbd56af1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-d8hdh" Mar 6 03:03:48.389255 kubelet[3329]: E0306 03:03:48.388911 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-d8hdh_kube-system(a66f999f-3bca-46b9-bcee-a4b3df253d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-d8hdh_kube-system(a66f999f-3bca-46b9-bcee-a4b3df253d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"528eab00574cb273bb9fea6f9498c601b6eba66d7bd8e8769ae5e13bbd56af1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-d8hdh" podUID="a66f999f-3bca-46b9-bcee-a4b3df253d8f" Mar 6 03:03:48.390554 containerd[1983]: time="2026-03-06T03:03:48.390522807Z" level=error msg="Failed to destroy network for sandbox \"3c7e01e601332a05a67041f66e6df09a76fcc879e2b34a7e64c035ed8602537a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.392607 containerd[1983]: time="2026-03-06T03:03:48.392543912Z" level=error msg="Failed to destroy network for sandbox \"5cbafdbb19fb2887e66f5a5db1051b1fcecdb56e0ef773c75ee6ac4bfcb17142\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.393238 containerd[1983]: time="2026-03-06T03:03:48.393105540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c8bfd94c4-wv9xg,Uid:30d37140-2c8f-4d28-bae5-0ab16e26d339,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7e01e601332a05a67041f66e6df09a76fcc879e2b34a7e64c035ed8602537a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.393942 kubelet[3329]: E0306 03:03:48.393316 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7e01e601332a05a67041f66e6df09a76fcc879e2b34a7e64c035ed8602537a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.393942 kubelet[3329]: E0306 03:03:48.393361 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7e01e601332a05a67041f66e6df09a76fcc879e2b34a7e64c035ed8602537a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c8bfd94c4-wv9xg" Mar 6 03:03:48.393942 kubelet[3329]: E0306 03:03:48.393380 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7e01e601332a05a67041f66e6df09a76fcc879e2b34a7e64c035ed8602537a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c8bfd94c4-wv9xg" Mar 6 03:03:48.394127 kubelet[3329]: E0306 03:03:48.393524 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c8bfd94c4-wv9xg_calico-system(30d37140-2c8f-4d28-bae5-0ab16e26d339)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c8bfd94c4-wv9xg_calico-system(30d37140-2c8f-4d28-bae5-0ab16e26d339)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c7e01e601332a05a67041f66e6df09a76fcc879e2b34a7e64c035ed8602537a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c8bfd94c4-wv9xg" podUID="30d37140-2c8f-4d28-bae5-0ab16e26d339" Mar 6 03:03:48.395192 containerd[1983]: time="2026-03-06T03:03:48.395133409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-zcmkp,Uid:031c3aba-3f87-40fc-a1e7-11afbad831cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbafdbb19fb2887e66f5a5db1051b1fcecdb56e0ef773c75ee6ac4bfcb17142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.395598 kubelet[3329]: E0306 03:03:48.395459 3329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbafdbb19fb2887e66f5a5db1051b1fcecdb56e0ef773c75ee6ac4bfcb17142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:03:48.395598 kubelet[3329]: E0306 
03:03:48.395505 3329 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbafdbb19fb2887e66f5a5db1051b1fcecdb56e0ef773c75ee6ac4bfcb17142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-zcmkp" Mar 6 03:03:48.395598 kubelet[3329]: E0306 03:03:48.395543 3329 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbafdbb19fb2887e66f5a5db1051b1fcecdb56e0ef773c75ee6ac4bfcb17142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-zcmkp" Mar 6 03:03:48.395744 kubelet[3329]: E0306 03:03:48.395629 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-zcmkp_calico-system(031c3aba-3f87-40fc-a1e7-11afbad831cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-zcmkp_calico-system(031c3aba-3f87-40fc-a1e7-11afbad831cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cbafdbb19fb2887e66f5a5db1051b1fcecdb56e0ef773c75ee6ac4bfcb17142\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-zcmkp" podUID="031c3aba-3f87-40fc-a1e7-11afbad831cd" Mar 6 03:03:48.463201 containerd[1983]: time="2026-03-06T03:03:48.463155157Z" level=info msg="StartContainer for \"2bdf0c46cf1013e0d2ad20e57fbe775c8c883b68ac6697640aa5b28609e1ab55\" returns successfully" Mar 6 03:03:49.108623 systemd[1]: 
run-netns-cni\x2d3de61e70\x2deb36\x2d7763\x2d3fa4\x2d6ae150d31e72.mount: Deactivated successfully. Mar 6 03:03:49.109925 systemd[1]: run-netns-cni\x2da379ef79\x2d77d6\x2d9ade\x2d6eed\x2d7835e65790fc.mount: Deactivated successfully. Mar 6 03:03:49.110157 systemd[1]: run-netns-cni\x2dc4e31f61\x2d3993\x2d480d\x2db907\x2db46cfd0e30a7.mount: Deactivated successfully. Mar 6 03:03:49.110252 systemd[1]: run-netns-cni\x2dd3e9f79c\x2d8e57\x2d088a\x2dffbc\x2d5eb11e43e122.mount: Deactivated successfully. Mar 6 03:03:49.110336 systemd[1]: run-netns-cni\x2d4413212e\x2deacd\x2d4f1a\x2dd4fa\x2dea9512b542c5.mount: Deactivated successfully. Mar 6 03:03:49.110444 systemd[1]: run-netns-cni\x2dbad8b6a2\x2dfc2c\x2df310\x2d2f21\x2d83d418252e62.mount: Deactivated successfully. Mar 6 03:03:49.110525 systemd[1]: run-netns-cni\x2dd3e4b918\x2dcd6d\x2df78c\x2de110\x2de1025b348d2a.mount: Deactivated successfully. Mar 6 03:03:50.015011 kubelet[3329]: I0306 03:03:50.014973 3329 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 6 03:03:50.017179 kubelet[3329]: I0306 03:03:50.016879 3329 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 6 03:03:50.196385 kubelet[3329]: I0306 03:03:50.194105 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-l6jzk" podStartSLOduration=3.115066005 podStartE2EDuration="28.194084815s" podCreationTimestamp="2026-03-06 03:03:22 +0000 UTC" firstStartedPulling="2026-03-06 03:03:22.927722379 +0000 UTC m=+25.439693445" lastFinishedPulling="2026-03-06 03:03:48.006741176 +0000 UTC m=+50.518712255" observedRunningTime="2026-03-06 03:03:49.031833147 +0000 UTC m=+51.543804235" watchObservedRunningTime="2026-03-06 03:03:50.194084815 +0000 UTC m=+52.706055905" Mar 6 03:03:50.420756 kubelet[3329]: I0306 03:03:50.420608 3329 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-nginx-config" pod 
"30d37140-2c8f-4d28-bae5-0ab16e26d339" (UID: "30d37140-2c8f-4d28-bae5-0ab16e26d339"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 03:03:50.423940 kubelet[3329]: I0306 03:03:50.423870 3329 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-nginx-config\" (UniqueName: \"kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-nginx-config\") pod \"30d37140-2c8f-4d28-bae5-0ab16e26d339\" (UID: \"30d37140-2c8f-4d28-bae5-0ab16e26d339\") " Mar 6 03:03:50.424283 kubelet[3329]: I0306 03:03:50.424196 3329 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/30d37140-2c8f-4d28-bae5-0ab16e26d339-kube-api-access-4z9gp\" (UniqueName: \"kubernetes.io/projected/30d37140-2c8f-4d28-bae5-0ab16e26d339-kube-api-access-4z9gp\") pod \"30d37140-2c8f-4d28-bae5-0ab16e26d339\" (UID: \"30d37140-2c8f-4d28-bae5-0ab16e26d339\") " Mar 6 03:03:50.424609 kubelet[3329]: I0306 03:03:50.424374 3329 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-backend-key-pair\") pod \"30d37140-2c8f-4d28-bae5-0ab16e26d339\" (UID: \"30d37140-2c8f-4d28-bae5-0ab16e26d339\") " Mar 6 03:03:50.424789 kubelet[3329]: I0306 03:03:50.424717 3329 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-ca-bundle\") pod \"30d37140-2c8f-4d28-bae5-0ab16e26d339\" (UID: \"30d37140-2c8f-4d28-bae5-0ab16e26d339\") " Mar 6 03:03:50.425212 kubelet[3329]: I0306 03:03:50.425156 3329 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-nginx-config\") on node \"ip-172-31-18-81\" DevicePath \"\"" Mar 6 03:03:50.425737 kubelet[3329]: I0306 03:03:50.425706 3329 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-ca-bundle" pod "30d37140-2c8f-4d28-bae5-0ab16e26d339" (UID: "30d37140-2c8f-4d28-bae5-0ab16e26d339"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 03:03:50.436571 kubelet[3329]: I0306 03:03:50.436121 3329 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-backend-key-pair" pod "30d37140-2c8f-4d28-bae5-0ab16e26d339" (UID: "30d37140-2c8f-4d28-bae5-0ab16e26d339"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 03:03:50.436868 kubelet[3329]: I0306 03:03:50.436814 3329 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d37140-2c8f-4d28-bae5-0ab16e26d339-kube-api-access-4z9gp" pod "30d37140-2c8f-4d28-bae5-0ab16e26d339" (UID: "30d37140-2c8f-4d28-bae5-0ab16e26d339"). InnerVolumeSpecName "kube-api-access-4z9gp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 03:03:50.438770 systemd[1]: var-lib-kubelet-pods-30d37140\x2d2c8f\x2d4d28\x2dbae5\x2d0ab16e26d339-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 03:03:50.446226 systemd[1]: var-lib-kubelet-pods-30d37140\x2d2c8f\x2d4d28\x2dbae5\x2d0ab16e26d339-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4z9gp.mount: Deactivated successfully. 
Mar 6 03:03:50.526479 kubelet[3329]: I0306 03:03:50.526398 3329 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4z9gp\" (UniqueName: \"kubernetes.io/projected/30d37140-2c8f-4d28-bae5-0ab16e26d339-kube-api-access-4z9gp\") on node \"ip-172-31-18-81\" DevicePath \"\"" Mar 6 03:03:50.526479 kubelet[3329]: I0306 03:03:50.526472 3329 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-backend-key-pair\") on node \"ip-172-31-18-81\" DevicePath \"\"" Mar 6 03:03:50.526479 kubelet[3329]: I0306 03:03:50.526485 3329 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d37140-2c8f-4d28-bae5-0ab16e26d339-whisker-ca-bundle\") on node \"ip-172-31-18-81\" DevicePath \"\"" Mar 6 03:03:51.028150 systemd[1]: Removed slice kubepods-besteffort-pod30d37140_2c8f_4d28_bae5_0ab16e26d339.slice - libcontainer container kubepods-besteffort-pod30d37140_2c8f_4d28_bae5_0ab16e26d339.slice. Mar 6 03:03:51.261759 systemd[1]: Created slice kubepods-besteffort-pod4c267a6b_90d8_401f_84a7_480cc5e89352.slice - libcontainer container kubepods-besteffort-pod4c267a6b_90d8_401f_84a7_480cc5e89352.slice. 
Mar 6 03:03:51.331219 kubelet[3329]: I0306 03:03:51.331095 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4c267a6b-90d8-401f-84a7-480cc5e89352-nginx-config\") pod \"whisker-5c457c4f6c-w494m\" (UID: \"4c267a6b-90d8-401f-84a7-480cc5e89352\") " pod="calico-system/whisker-5c457c4f6c-w494m" Mar 6 03:03:51.331219 kubelet[3329]: I0306 03:03:51.331153 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqhk2\" (UniqueName: \"kubernetes.io/projected/4c267a6b-90d8-401f-84a7-480cc5e89352-kube-api-access-sqhk2\") pod \"whisker-5c457c4f6c-w494m\" (UID: \"4c267a6b-90d8-401f-84a7-480cc5e89352\") " pod="calico-system/whisker-5c457c4f6c-w494m" Mar 6 03:03:51.331219 kubelet[3329]: I0306 03:03:51.331180 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4c267a6b-90d8-401f-84a7-480cc5e89352-whisker-backend-key-pair\") pod \"whisker-5c457c4f6c-w494m\" (UID: \"4c267a6b-90d8-401f-84a7-480cc5e89352\") " pod="calico-system/whisker-5c457c4f6c-w494m" Mar 6 03:03:51.331219 kubelet[3329]: I0306 03:03:51.331208 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c267a6b-90d8-401f-84a7-480cc5e89352-whisker-ca-bundle\") pod \"whisker-5c457c4f6c-w494m\" (UID: \"4c267a6b-90d8-401f-84a7-480cc5e89352\") " pod="calico-system/whisker-5c457c4f6c-w494m" Mar 6 03:03:51.578103 containerd[1983]: time="2026-03-06T03:03:51.577880739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c457c4f6c-w494m,Uid:4c267a6b-90d8-401f-84a7-480cc5e89352,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:51.649513 kubelet[3329]: I0306 03:03:51.649143 3329 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" 
podUID="30d37140-2c8f-4d28-bae5-0ab16e26d339" path="/var/lib/kubelet/pods/30d37140-2c8f-4d28-bae5-0ab16e26d339/volumes" Mar 6 03:03:52.140316 systemd-networkd[1838]: cali654957cafb3: Link UP Mar 6 03:03:52.140575 systemd-networkd[1838]: cali654957cafb3: Gained carrier Mar 6 03:03:52.175690 containerd[1983]: 2026-03-06 03:03:51.639 [ERROR][4758] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 03:03:52.175690 containerd[1983]: 2026-03-06 03:03:51.708 [INFO][4758] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0 whisker-5c457c4f6c- calico-system 4c267a6b-90d8-401f-84a7-480cc5e89352 930 0 2026-03-06 03:03:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c457c4f6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-81 whisker-5c457c4f6c-w494m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali654957cafb3 [] [] }} ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Namespace="calico-system" Pod="whisker-5c457c4f6c-w494m" WorkloadEndpoint="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-" Mar 6 03:03:52.175690 containerd[1983]: 2026-03-06 03:03:51.708 [INFO][4758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Namespace="calico-system" Pod="whisker-5c457c4f6c-w494m" WorkloadEndpoint="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" Mar 6 03:03:52.175690 containerd[1983]: 2026-03-06 03:03:51.956 [INFO][4771] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" HandleID="k8s-pod-network.0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Workload="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" Mar 6 03:03:52.176218 containerd[1983]: 2026-03-06 03:03:51.997 [INFO][4771] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" HandleID="k8s-pod-network.0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Workload="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f6cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-81", "pod":"whisker-5c457c4f6c-w494m", "timestamp":"2026-03-06 03:03:51.956547816 +0000 UTC"}, Hostname:"ip-172-31-18-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ee2c0)} Mar 6 03:03:52.176218 containerd[1983]: 2026-03-06 03:03:51.997 [INFO][4771] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:52.176218 containerd[1983]: 2026-03-06 03:03:51.997 [INFO][4771] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:03:52.176218 containerd[1983]: 2026-03-06 03:03:51.997 [INFO][4771] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-81' Mar 6 03:03:52.176218 containerd[1983]: 2026-03-06 03:03:52.003 [INFO][4771] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" host="ip-172-31-18-81" Mar 6 03:03:52.176218 containerd[1983]: 2026-03-06 03:03:52.017 [INFO][4771] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-81" Mar 6 03:03:52.176218 containerd[1983]: 2026-03-06 03:03:52.035 [INFO][4771] ipam/ipam.go 526: Trying affinity for 192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:03:52.176218 containerd[1983]: 2026-03-06 03:03:52.040 [INFO][4771] ipam/ipam.go 160: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:03:52.176218 containerd[1983]: 2026-03-06 03:03:52.042 [INFO][4771] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:03:52.179748 containerd[1983]: 2026-03-06 03:03:52.043 [INFO][4771] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" host="ip-172-31-18-81" Mar 6 03:03:52.179748 containerd[1983]: 2026-03-06 03:03:52.045 [INFO][4771] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b Mar 6 03:03:52.179748 containerd[1983]: 2026-03-06 03:03:52.052 [INFO][4771] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" host="ip-172-31-18-81" Mar 6 03:03:52.179748 containerd[1983]: 2026-03-06 03:03:52.063 [INFO][4771] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.49.1/26] block=192.168.49.0/26 
handle="k8s-pod-network.0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" host="ip-172-31-18-81" Mar 6 03:03:52.179748 containerd[1983]: 2026-03-06 03:03:52.065 [INFO][4771] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.49.1/26] handle="k8s-pod-network.0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" host="ip-172-31-18-81" Mar 6 03:03:52.179748 containerd[1983]: 2026-03-06 03:03:52.065 [INFO][4771] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:03:52.179748 containerd[1983]: 2026-03-06 03:03:52.065 [INFO][4771] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.49.1/26] IPv6=[] ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" HandleID="k8s-pod-network.0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Workload="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" Mar 6 03:03:52.180030 containerd[1983]: 2026-03-06 03:03:52.070 [INFO][4758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Namespace="calico-system" Pod="whisker-5c457c4f6c-w494m" WorkloadEndpoint="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0", GenerateName:"whisker-5c457c4f6c-", Namespace:"calico-system", SelfLink:"", UID:"4c267a6b-90d8-401f-84a7-480cc5e89352", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c457c4f6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"", Pod:"whisker-5c457c4f6c-w494m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali654957cafb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:52.180030 containerd[1983]: 2026-03-06 03:03:52.070 [INFO][4758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.1/32] ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Namespace="calico-system" Pod="whisker-5c457c4f6c-w494m" WorkloadEndpoint="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" Mar 6 03:03:52.180160 containerd[1983]: 2026-03-06 03:03:52.070 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali654957cafb3 ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Namespace="calico-system" Pod="whisker-5c457c4f6c-w494m" WorkloadEndpoint="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" Mar 6 03:03:52.180160 containerd[1983]: 2026-03-06 03:03:52.137 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Namespace="calico-system" Pod="whisker-5c457c4f6c-w494m" WorkloadEndpoint="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" Mar 6 03:03:52.180227 containerd[1983]: 2026-03-06 03:03:52.139 [INFO][4758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Namespace="calico-system" 
Pod="whisker-5c457c4f6c-w494m" WorkloadEndpoint="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0", GenerateName:"whisker-5c457c4f6c-", Namespace:"calico-system", SelfLink:"", UID:"4c267a6b-90d8-401f-84a7-480cc5e89352", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c457c4f6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b", Pod:"whisker-5c457c4f6c-w494m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali654957cafb3", MAC:"5e:16:75:df:e2:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:52.180295 containerd[1983]: 2026-03-06 03:03:52.170 [INFO][4758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" Namespace="calico-system" Pod="whisker-5c457c4f6c-w494m" WorkloadEndpoint="ip--172--31--18--81-k8s-whisker--5c457c4f6c--w494m-eth0" Mar 6 03:03:52.229145 (udev-worker)[4801]: Network interface 
NamePolicy= disabled on kernel command line. Mar 6 03:03:52.394687 containerd[1983]: time="2026-03-06T03:03:52.394567155Z" level=info msg="connecting to shim 0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b" address="unix:///run/containerd/s/3639307c8a7f6309334ffc9bba4038dd3dbd58f3c297c7e4fa9aebcc1fabb64b" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:52.450691 systemd[1]: Started cri-containerd-0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b.scope - libcontainer container 0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b. Mar 6 03:03:52.535960 containerd[1983]: time="2026-03-06T03:03:52.535897697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c457c4f6c-w494m,Uid:4c267a6b-90d8-401f-84a7-480cc5e89352,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b\"" Mar 6 03:03:52.581061 containerd[1983]: time="2026-03-06T03:03:52.581021565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 03:03:53.671508 systemd-networkd[1838]: cali654957cafb3: Gained IPv6LL Mar 6 03:03:53.904593 systemd-networkd[1838]: vxlan.calico: Link UP Mar 6 03:03:53.904602 systemd-networkd[1838]: vxlan.calico: Gained carrier Mar 6 03:03:53.905618 (udev-worker)[4800]: Network interface NamePolicy= disabled on kernel command line. 
Mar 6 03:03:54.585511 containerd[1983]: time="2026-03-06T03:03:54.584497168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:54.590784 containerd[1983]: time="2026-03-06T03:03:54.590382068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 6 03:03:54.594171 containerd[1983]: time="2026-03-06T03:03:54.594112048Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:54.600149 containerd[1983]: time="2026-03-06T03:03:54.600103728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:54.600948 containerd[1983]: time="2026-03-06T03:03:54.600906822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.019832148s" Mar 6 03:03:54.601050 containerd[1983]: time="2026-03-06T03:03:54.600952082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 6 03:03:54.777286 containerd[1983]: time="2026-03-06T03:03:54.777245198Z" level=info msg="CreateContainer within sandbox \"0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 03:03:54.796283 containerd[1983]: time="2026-03-06T03:03:54.796240417Z" level=info 
msg="Container 7d7d33e7def295123c4cf2be0cbf02b085a86b5e94cc93f6f238ef0232f52141: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:54.807087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599764395.mount: Deactivated successfully. Mar 6 03:03:54.816576 containerd[1983]: time="2026-03-06T03:03:54.816531728Z" level=info msg="CreateContainer within sandbox \"0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7d7d33e7def295123c4cf2be0cbf02b085a86b5e94cc93f6f238ef0232f52141\"" Mar 6 03:03:54.817241 containerd[1983]: time="2026-03-06T03:03:54.817203763Z" level=info msg="StartContainer for \"7d7d33e7def295123c4cf2be0cbf02b085a86b5e94cc93f6f238ef0232f52141\"" Mar 6 03:03:54.820807 containerd[1983]: time="2026-03-06T03:03:54.820758538Z" level=info msg="connecting to shim 7d7d33e7def295123c4cf2be0cbf02b085a86b5e94cc93f6f238ef0232f52141" address="unix:///run/containerd/s/3639307c8a7f6309334ffc9bba4038dd3dbd58f3c297c7e4fa9aebcc1fabb64b" protocol=ttrpc version=3 Mar 6 03:03:54.895366 systemd[1]: Started cri-containerd-7d7d33e7def295123c4cf2be0cbf02b085a86b5e94cc93f6f238ef0232f52141.scope - libcontainer container 7d7d33e7def295123c4cf2be0cbf02b085a86b5e94cc93f6f238ef0232f52141. Mar 6 03:03:55.017576 containerd[1983]: time="2026-03-06T03:03:55.017521774Z" level=info msg="StartContainer for \"7d7d33e7def295123c4cf2be0cbf02b085a86b5e94cc93f6f238ef0232f52141\" returns successfully" Mar 6 03:03:55.026777 containerd[1983]: time="2026-03-06T03:03:55.026710260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 03:03:55.648569 systemd-networkd[1838]: vxlan.calico: Gained IPv6LL Mar 6 03:03:56.079125 systemd[1]: Started sshd@7-172.31.18.81:22-68.220.241.50:39222.service - OpenSSH per-connection server daemon (68.220.241.50:39222). 
Mar 6 03:03:56.599556 sshd[4998]: Accepted publickey for core from 68.220.241.50 port 39222 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:03:56.604564 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:03:56.613549 systemd-logind[1952]: New session 8 of user core. Mar 6 03:03:56.618828 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 6 03:03:56.715351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732876023.mount: Deactivated successfully. Mar 6 03:03:56.739394 containerd[1983]: time="2026-03-06T03:03:56.738543660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:56.739905 containerd[1983]: time="2026-03-06T03:03:56.739876221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 6 03:03:56.740809 containerd[1983]: time="2026-03-06T03:03:56.740745844Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:56.745868 containerd[1983]: time="2026-03-06T03:03:56.745831390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:03:56.747305 containerd[1983]: time="2026-03-06T03:03:56.747261159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 
1.720484799s" Mar 6 03:03:56.747391 containerd[1983]: time="2026-03-06T03:03:56.747327044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 6 03:03:56.753238 containerd[1983]: time="2026-03-06T03:03:56.753196880Z" level=info msg="CreateContainer within sandbox \"0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 03:03:56.762608 containerd[1983]: time="2026-03-06T03:03:56.762563137Z" level=info msg="Container 4659d06af999a39ed0a83a68eaea1b8b0c2b304cbfa23646e4f8583e5f333cb2: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:56.788084 containerd[1983]: time="2026-03-06T03:03:56.788008234Z" level=info msg="CreateContainer within sandbox \"0a6580cd4c3f030307f32467885aa8431c58ba9b181404953e03afee489e212b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4659d06af999a39ed0a83a68eaea1b8b0c2b304cbfa23646e4f8583e5f333cb2\"" Mar 6 03:03:56.792261 containerd[1983]: time="2026-03-06T03:03:56.790409255Z" level=info msg="StartContainer for \"4659d06af999a39ed0a83a68eaea1b8b0c2b304cbfa23646e4f8583e5f333cb2\"" Mar 6 03:03:56.809049 containerd[1983]: time="2026-03-06T03:03:56.808995700Z" level=info msg="connecting to shim 4659d06af999a39ed0a83a68eaea1b8b0c2b304cbfa23646e4f8583e5f333cb2" address="unix:///run/containerd/s/3639307c8a7f6309334ffc9bba4038dd3dbd58f3c297c7e4fa9aebcc1fabb64b" protocol=ttrpc version=3 Mar 6 03:03:56.860681 systemd[1]: Started cri-containerd-4659d06af999a39ed0a83a68eaea1b8b0c2b304cbfa23646e4f8583e5f333cb2.scope - libcontainer container 4659d06af999a39ed0a83a68eaea1b8b0c2b304cbfa23646e4f8583e5f333cb2. 
Mar 6 03:03:56.939984 containerd[1983]: time="2026-03-06T03:03:56.939176583Z" level=info msg="StartContainer for \"4659d06af999a39ed0a83a68eaea1b8b0c2b304cbfa23646e4f8583e5f333cb2\" returns successfully"
Mar 6 03:03:57.713238 sshd[5005]: Connection closed by 68.220.241.50 port 39222
Mar 6 03:03:57.722367 systemd[1]: sshd@7-172.31.18.81:22-68.220.241.50:39222.service: Deactivated successfully.
Mar 6 03:03:57.714653 sshd-session[4998]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:57.725652 systemd[1]: session-8.scope: Deactivated successfully.
Mar 6 03:03:57.730862 systemd-logind[1952]: Session 8 logged out. Waiting for processes to exit.
Mar 6 03:03:57.733719 systemd-logind[1952]: Removed session 8.
Mar 6 03:03:57.837599 ntpd[2217]: Listen normally on 6 vxlan.calico 192.168.49.0:123
Mar 6 03:03:57.837665 ntpd[2217]: Listen normally on 7 cali654957cafb3 [fe80::ecee:eeff:feee:eeee%4]:123
Mar 6 03:03:57.837697 ntpd[2217]: Listen normally on 8 vxlan.calico [fe80::6485:82ff:fef6:91cf%5]:123
Mar 6 03:03:58.648655 containerd[1983]: time="2026-03-06T03:03:58.648611801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zsntj,Uid:aa230f7b-c2dc-46a5-ab7f-c83880b50346,Namespace:calico-system,Attempt:0,}"
Mar 6 03:03:58.811920 systemd-networkd[1838]: calib0f52df67d1: Link UP
Mar 6 03:03:58.814693 systemd-networkd[1838]: calib0f52df67d1: Gained carrier
Mar 6 03:03:58.842108 kubelet[3329]: I0306 03:03:58.836287 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-5c457c4f6c-w494m" podStartSLOduration=3.664486013 podStartE2EDuration="7.832376721s" podCreationTimestamp="2026-03-06 03:03:51 +0000 UTC" firstStartedPulling="2026-03-06 03:03:52.580367877 +0000 UTC m=+55.092338946" lastFinishedPulling="2026-03-06 03:03:56.748258575 +0000 UTC m=+59.260229654" observedRunningTime="2026-03-06 03:03:57.192021452 +0000 UTC m=+59.703992539" watchObservedRunningTime="2026-03-06 03:03:58.832376721 +0000 UTC m=+61.344347811"
Mar 6 03:03:58.876804 containerd[1983]: 2026-03-06 03:03:58.718 [INFO][5058] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0 csi-node-driver- calico-system aa230f7b-c2dc-46a5-ab7f-c83880b50346 709 0 2026-03-06 03:03:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-81 csi-node-driver-zsntj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib0f52df67d1 [] [] }} ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Namespace="calico-system" Pod="csi-node-driver-zsntj" WorkloadEndpoint="ip--172--31--18--81-k8s-csi--node--driver--zsntj-"
Mar 6 03:03:58.876804 containerd[1983]: 2026-03-06 03:03:58.719 [INFO][5058] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Namespace="calico-system" Pod="csi-node-driver-zsntj" WorkloadEndpoint="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0"
Mar 6 03:03:58.876804 containerd[1983]: 2026-03-06 03:03:58.755 [INFO][5070] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" HandleID="k8s-pod-network.a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Workload="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0"
Mar 6 03:03:58.844529 (udev-worker)[5080]: Network interface NamePolicy= disabled on kernel command line.
Mar 6 03:03:58.877324 containerd[1983]: 2026-03-06 03:03:58.763 [INFO][5070] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" HandleID="k8s-pod-network.a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Workload="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-81", "pod":"csi-node-driver-zsntj", "timestamp":"2026-03-06 03:03:58.755698752 +0000 UTC"}, Hostname:"ip-172-31-18-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b7080)}
Mar 6 03:03:58.877324 containerd[1983]: 2026-03-06 03:03:58.763 [INFO][5070] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 03:03:58.877324 containerd[1983]: 2026-03-06 03:03:58.763 [INFO][5070] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 03:03:58.877324 containerd[1983]: 2026-03-06 03:03:58.763 [INFO][5070] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-81'
Mar 6 03:03:58.877324 containerd[1983]: 2026-03-06 03:03:58.766 [INFO][5070] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" host="ip-172-31-18-81"
Mar 6 03:03:58.877324 containerd[1983]: 2026-03-06 03:03:58.773 [INFO][5070] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-81"
Mar 6 03:03:58.877324 containerd[1983]: 2026-03-06 03:03:58.779 [INFO][5070] ipam/ipam.go 526: Trying affinity for 192.168.49.0/26 host="ip-172-31-18-81"
Mar 6 03:03:58.877324 containerd[1983]: 2026-03-06 03:03:58.781 [INFO][5070] ipam/ipam.go 160: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-18-81"
Mar 6 03:03:58.877324 containerd[1983]: 2026-03-06 03:03:58.783 [INFO][5070] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-18-81"
Mar 6 03:03:58.877772 containerd[1983]: 2026-03-06 03:03:58.783 [INFO][5070] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" host="ip-172-31-18-81"
Mar 6 03:03:58.877772 containerd[1983]: 2026-03-06 03:03:58.786 [INFO][5070] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28
Mar 6 03:03:58.877772 containerd[1983]: 2026-03-06 03:03:58.791 [INFO][5070] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" host="ip-172-31-18-81"
Mar 6 03:03:58.877772 containerd[1983]: 2026-03-06 03:03:58.799 [INFO][5070] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.49.2/26] block=192.168.49.0/26 handle="k8s-pod-network.a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" host="ip-172-31-18-81"
Mar 6 03:03:58.877772 containerd[1983]: 2026-03-06 03:03:58.799 [INFO][5070] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.49.2/26] handle="k8s-pod-network.a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" host="ip-172-31-18-81"
Mar 6 03:03:58.877772 containerd[1983]: 2026-03-06 03:03:58.799 [INFO][5070] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 03:03:58.877772 containerd[1983]: 2026-03-06 03:03:58.799 [INFO][5070] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.49.2/26] IPv6=[] ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" HandleID="k8s-pod-network.a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Workload="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0"
Mar 6 03:03:58.882743 containerd[1983]: 2026-03-06 03:03:58.803 [INFO][5058] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Namespace="calico-system" Pod="csi-node-driver-zsntj" WorkloadEndpoint="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa230f7b-c2dc-46a5-ab7f-c83880b50346", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"", Pod:"csi-node-driver-zsntj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib0f52df67d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 03:03:58.882858 containerd[1983]: 2026-03-06 03:03:58.803 [INFO][5058] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.2/32] ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Namespace="calico-system" Pod="csi-node-driver-zsntj" WorkloadEndpoint="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0"
Mar 6 03:03:58.882858 containerd[1983]: 2026-03-06 03:03:58.803 [INFO][5058] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0f52df67d1 ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Namespace="calico-system" Pod="csi-node-driver-zsntj" WorkloadEndpoint="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0"
Mar 6 03:03:58.882858 containerd[1983]: 2026-03-06 03:03:58.815 [INFO][5058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Namespace="calico-system" Pod="csi-node-driver-zsntj" WorkloadEndpoint="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0"
Mar 6 03:03:58.882987 containerd[1983]: 2026-03-06 03:03:58.817 [INFO][5058] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Namespace="calico-system" Pod="csi-node-driver-zsntj" WorkloadEndpoint="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa230f7b-c2dc-46a5-ab7f-c83880b50346", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28", Pod:"csi-node-driver-zsntj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib0f52df67d1", MAC:"5a:d4:a9:2b:d3:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 03:03:58.883082 containerd[1983]: 2026-03-06 03:03:58.829 [INFO][5058] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" Namespace="calico-system" Pod="csi-node-driver-zsntj" WorkloadEndpoint="ip--172--31--18--81-k8s-csi--node--driver--zsntj-eth0"
Mar 6 03:03:58.921463 containerd[1983]: time="2026-03-06T03:03:58.919700957Z" level=info msg="connecting to shim a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28" address="unix:///run/containerd/s/e9fc2660cef22404fd4e79e2a757935a865b9bcd7548e6fd6c119650dd234524" namespace=k8s.io protocol=ttrpc version=3
Mar 6 03:03:58.972656 systemd[1]: Started cri-containerd-a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28.scope - libcontainer container a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28.
Mar 6 03:03:59.025037 containerd[1983]: time="2026-03-06T03:03:59.025000247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zsntj,Uid:aa230f7b-c2dc-46a5-ab7f-c83880b50346,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28\""
Mar 6 03:03:59.027334 containerd[1983]: time="2026-03-06T03:03:59.027301971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 6 03:03:59.645840 containerd[1983]: time="2026-03-06T03:03:59.645725287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d48547fb8-6cbb5,Uid:289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9,Namespace:calico-system,Attempt:0,}"
Mar 6 03:03:59.651026 containerd[1983]: time="2026-03-06T03:03:59.650640595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbfb65546-bwshp,Uid:5de748ec-8a9e-43a7-9a5d-22442eda9add,Namespace:calico-system,Attempt:0,}"
Mar 6 03:03:59.842093 systemd-networkd[1838]: calic66315d401d: Link UP
Mar 6 03:03:59.843688 systemd-networkd[1838]: calic66315d401d: Gained carrier
Mar 6 03:03:59.869449 containerd[1983]: 2026-03-06 03:03:59.723 [INFO][5156] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0 calico-kube-controllers-6d48547fb8- calico-system 289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9 863 0 2026-03-06 03:03:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d48547fb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-81 calico-kube-controllers-6d48547fb8-6cbb5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic66315d401d [] [] }} ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Namespace="calico-system" Pod="calico-kube-controllers-6d48547fb8-6cbb5" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-"
Mar 6 03:03:59.869449 containerd[1983]: 2026-03-06 03:03:59.724 [INFO][5156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Namespace="calico-system" Pod="calico-kube-controllers-6d48547fb8-6cbb5" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0"
Mar 6 03:03:59.869449 containerd[1983]: 2026-03-06 03:03:59.769 [INFO][5179] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" HandleID="k8s-pod-network.ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Workload="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0"
Mar 6 03:03:59.869839 containerd[1983]: 2026-03-06 03:03:59.780 [INFO][5179] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" HandleID="k8s-pod-network.ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Workload="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380cf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-81", "pod":"calico-kube-controllers-6d48547fb8-6cbb5", "timestamp":"2026-03-06 03:03:59.769185758 +0000 UTC"}, Hostname:"ip-172-31-18-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036ef20)}
Mar 6 03:03:59.869839 containerd[1983]: 2026-03-06 03:03:59.781 [INFO][5179] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 03:03:59.869839 containerd[1983]: 2026-03-06 03:03:59.781 [INFO][5179] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 03:03:59.869839 containerd[1983]: 2026-03-06 03:03:59.781 [INFO][5179] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-81'
Mar 6 03:03:59.869839 containerd[1983]: 2026-03-06 03:03:59.785 [INFO][5179] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" host="ip-172-31-18-81"
Mar 6 03:03:59.869839 containerd[1983]: 2026-03-06 03:03:59.792 [INFO][5179] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-81"
Mar 6 03:03:59.869839 containerd[1983]: 2026-03-06 03:03:59.802 [INFO][5179] ipam/ipam.go 526: Trying affinity for 192.168.49.0/26 host="ip-172-31-18-81"
Mar 6 03:03:59.869839 containerd[1983]: 2026-03-06 03:03:59.807 [INFO][5179] ipam/ipam.go 160: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-18-81"
Mar 6 03:03:59.869839 containerd[1983]: 2026-03-06 03:03:59.812 [INFO][5179] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-18-81"
Mar 6 03:03:59.871800 containerd[1983]: 2026-03-06 03:03:59.812 [INFO][5179] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" host="ip-172-31-18-81"
Mar 6 03:03:59.871800 containerd[1983]: 2026-03-06 03:03:59.816 [INFO][5179] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e
Mar 6 03:03:59.871800 containerd[1983]: 2026-03-06 03:03:59.822 [INFO][5179] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" host="ip-172-31-18-81"
Mar 6 03:03:59.871800 containerd[1983]: 2026-03-06 03:03:59.832 [INFO][5179] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.49.3/26] block=192.168.49.0/26 handle="k8s-pod-network.ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" host="ip-172-31-18-81"
Mar 6 03:03:59.871800 containerd[1983]: 2026-03-06 03:03:59.833 [INFO][5179] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.49.3/26] handle="k8s-pod-network.ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" host="ip-172-31-18-81"
Mar 6 03:03:59.871800 containerd[1983]: 2026-03-06 03:03:59.833 [INFO][5179] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 03:03:59.871800 containerd[1983]: 2026-03-06 03:03:59.833 [INFO][5179] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.49.3/26] IPv6=[] ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" HandleID="k8s-pod-network.ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Workload="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0"
Mar 6 03:03:59.872338 containerd[1983]: 2026-03-06 03:03:59.836 [INFO][5156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Namespace="calico-system" Pod="calico-kube-controllers-6d48547fb8-6cbb5" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0", GenerateName:"calico-kube-controllers-6d48547fb8-", Namespace:"calico-system", SelfLink:"", UID:"289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d48547fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"", Pod:"calico-kube-controllers-6d48547fb8-6cbb5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic66315d401d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 03:03:59.872621 containerd[1983]: 2026-03-06 03:03:59.837 [INFO][5156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.3/32] ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Namespace="calico-system" Pod="calico-kube-controllers-6d48547fb8-6cbb5" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0"
Mar 6 03:03:59.872621 containerd[1983]: 2026-03-06 03:03:59.838 [INFO][5156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic66315d401d ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Namespace="calico-system" Pod="calico-kube-controllers-6d48547fb8-6cbb5" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0"
Mar 6 03:03:59.872621 containerd[1983]: 2026-03-06 03:03:59.844 [INFO][5156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Namespace="calico-system" Pod="calico-kube-controllers-6d48547fb8-6cbb5" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0"
Mar 6 03:03:59.872789 containerd[1983]: 2026-03-06 03:03:59.845 [INFO][5156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Namespace="calico-system" Pod="calico-kube-controllers-6d48547fb8-6cbb5" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0", GenerateName:"calico-kube-controllers-6d48547fb8-", Namespace:"calico-system", SelfLink:"", UID:"289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d48547fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e", Pod:"calico-kube-controllers-6d48547fb8-6cbb5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic66315d401d", MAC:"d2:24:7e:28:7c:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 03:03:59.872891 containerd[1983]: 2026-03-06 03:03:59.862 [INFO][5156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" Namespace="calico-system" Pod="calico-kube-controllers-6d48547fb8-6cbb5" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--kube--controllers--6d48547fb8--6cbb5-eth0"
Mar 6 03:03:59.974731 containerd[1983]: time="2026-03-06T03:03:59.973409957Z" level=info msg="connecting to shim ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e" address="unix:///run/containerd/s/de37f765f4f544a9f83f0683e82420377c33b112014aa255d330619b229c0834" namespace=k8s.io protocol=ttrpc version=3
Mar 6 03:04:00.026211 systemd-networkd[1838]: cali21cd075e471: Link UP
Mar 6 03:04:00.026942 systemd[1]: Started cri-containerd-ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e.scope - libcontainer container ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e.
Mar 6 03:04:00.029696 systemd-networkd[1838]: cali21cd075e471: Gained carrier
Mar 6 03:04:00.058510 containerd[1983]: 2026-03-06 03:03:59.731 [INFO][5165] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0 calico-apiserver-7fbfb65546- calico-system 5de748ec-8a9e-43a7-9a5d-22442eda9add 861 0 2026-03-06 03:03:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fbfb65546 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-81 calico-apiserver-7fbfb65546-bwshp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali21cd075e471 [] [] }} ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-bwshp" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-"
Mar 6 03:04:00.058510 containerd[1983]: 2026-03-06 03:03:59.732 [INFO][5165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-bwshp" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0"
Mar 6 03:04:00.058510 containerd[1983]: 2026-03-06 03:03:59.789 [INFO][5184] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" HandleID="k8s-pod-network.14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Workload="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0"
Mar 6 03:04:00.058818 containerd[1983]: 2026-03-06 03:03:59.802 [INFO][5184] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" HandleID="k8s-pod-network.14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Workload="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdc90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-81", "pod":"calico-apiserver-7fbfb65546-bwshp", "timestamp":"2026-03-06 03:03:59.789389661 +0000 UTC"}, Hostname:"ip-172-31-18-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00046b080)}
Mar 6 03:04:00.058818 containerd[1983]: 2026-03-06 03:03:59.803 [INFO][5184] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 03:04:00.058818 containerd[1983]: 2026-03-06 03:03:59.833 [INFO][5184] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 03:04:00.058818 containerd[1983]: 2026-03-06 03:03:59.833 [INFO][5184] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-81'
Mar 6 03:04:00.058818 containerd[1983]: 2026-03-06 03:03:59.888 [INFO][5184] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" host="ip-172-31-18-81"
Mar 6 03:04:00.058818 containerd[1983]: 2026-03-06 03:03:59.914 [INFO][5184] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-81"
Mar 6 03:04:00.058818 containerd[1983]: 2026-03-06 03:03:59.933 [INFO][5184] ipam/ipam.go 526: Trying affinity for 192.168.49.0/26 host="ip-172-31-18-81"
Mar 6 03:04:00.058818 containerd[1983]: 2026-03-06 03:03:59.938 [INFO][5184] ipam/ipam.go 160: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-18-81"
Mar 6 03:04:00.058818 containerd[1983]: 2026-03-06 03:03:59.942 [INFO][5184] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-18-81"
Mar 6 03:04:00.059205 containerd[1983]: 2026-03-06 03:03:59.942 [INFO][5184] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" host="ip-172-31-18-81"
Mar 6 03:04:00.059205 containerd[1983]: 2026-03-06 03:03:59.946 [INFO][5184] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2
Mar 6 03:04:00.059205 containerd[1983]: 2026-03-06 03:03:59.956 [INFO][5184] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" host="ip-172-31-18-81"
Mar 6 03:04:00.059205 containerd[1983]: 2026-03-06 03:03:59.978 [INFO][5184] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.49.4/26] block=192.168.49.0/26 handle="k8s-pod-network.14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" host="ip-172-31-18-81"
Mar 6 03:04:00.059205 containerd[1983]: 2026-03-06 03:03:59.979 [INFO][5184] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.49.4/26] handle="k8s-pod-network.14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" host="ip-172-31-18-81"
Mar 6 03:04:00.059205 containerd[1983]: 2026-03-06 03:03:59.979 [INFO][5184] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 03:04:00.059205 containerd[1983]: 2026-03-06 03:03:59.979 [INFO][5184] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.49.4/26] IPv6=[] ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" HandleID="k8s-pod-network.14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Workload="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0"
Mar 6 03:04:00.059507 containerd[1983]: 2026-03-06 03:03:59.992 [INFO][5165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-bwshp" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0", GenerateName:"calico-apiserver-7fbfb65546-", Namespace:"calico-system", SelfLink:"", UID:"5de748ec-8a9e-43a7-9a5d-22442eda9add", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fbfb65546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"", Pod:"calico-apiserver-7fbfb65546-bwshp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali21cd075e471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 03:04:00.059612 containerd[1983]: 2026-03-06 03:04:00.005 [INFO][5165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.4/32] ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-bwshp" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0"
Mar 6 03:04:00.059612 containerd[1983]: 2026-03-06 03:04:00.012 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21cd075e471 ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-bwshp" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0"
Mar 6 03:04:00.059612 containerd[1983]: 2026-03-06 03:04:00.030 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-bwshp" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0"
Mar 6 03:04:00.059743 containerd[1983]: 2026-03-06 03:04:00.033 [INFO][5165]
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-bwshp" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0", GenerateName:"calico-apiserver-7fbfb65546-", Namespace:"calico-system", SelfLink:"", UID:"5de748ec-8a9e-43a7-9a5d-22442eda9add", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fbfb65546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2", Pod:"calico-apiserver-7fbfb65546-bwshp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali21cd075e471", MAC:"06:cd:93:2b:ab:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:04:00.059839 containerd[1983]: 2026-03-06 03:04:00.052 [INFO][5165] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-bwshp" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--bwshp-eth0" Mar 6 03:04:00.116516 containerd[1983]: time="2026-03-06T03:04:00.116466458Z" level=info msg="connecting to shim 14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2" address="unix:///run/containerd/s/e63ddb8ef1923c693f0f03e34be9888a11e57cbfc2b273f05f734029e7aeb1db" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:04:00.171801 systemd[1]: Started cri-containerd-14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2.scope - libcontainer container 14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2. Mar 6 03:04:00.241011 containerd[1983]: time="2026-03-06T03:04:00.240966263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d48547fb8-6cbb5,Uid:289ec3ea-b2e0-4ded-ad19-fc4c25fc97e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e\"" Mar 6 03:04:00.305280 containerd[1983]: time="2026-03-06T03:04:00.305236361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbfb65546-bwshp,Uid:5de748ec-8a9e-43a7-9a5d-22442eda9add,Namespace:calico-system,Attempt:0,} returns sandbox id \"14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2\"" Mar 6 03:04:00.461140 containerd[1983]: time="2026-03-06T03:04:00.461089684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:00.461990 containerd[1983]: time="2026-03-06T03:04:00.461953517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 6 03:04:00.464303 containerd[1983]: time="2026-03-06T03:04:00.463095527Z" level=info msg="ImageCreate event 
name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:00.465312 containerd[1983]: time="2026-03-06T03:04:00.465278740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:00.466051 containerd[1983]: time="2026-03-06T03:04:00.466002717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.438665449s" Mar 6 03:04:00.466277 containerd[1983]: time="2026-03-06T03:04:00.466253371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 6 03:04:00.467862 containerd[1983]: time="2026-03-06T03:04:00.467836725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 03:04:00.473470 containerd[1983]: time="2026-03-06T03:04:00.473435922Z" level=info msg="CreateContainer within sandbox \"a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 6 03:04:00.484521 containerd[1983]: time="2026-03-06T03:04:00.484253070Z" level=info msg="Container 8102d4b65fe335a29602dc2258e8188affc876824224228a6c6e687415dfd335: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:04:00.508263 containerd[1983]: time="2026-03-06T03:04:00.508105898Z" level=info msg="CreateContainer within sandbox \"a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"8102d4b65fe335a29602dc2258e8188affc876824224228a6c6e687415dfd335\"" Mar 6 03:04:00.509161 containerd[1983]: time="2026-03-06T03:04:00.509079466Z" level=info msg="StartContainer for \"8102d4b65fe335a29602dc2258e8188affc876824224228a6c6e687415dfd335\"" Mar 6 03:04:00.511312 containerd[1983]: time="2026-03-06T03:04:00.511278705Z" level=info msg="connecting to shim 8102d4b65fe335a29602dc2258e8188affc876824224228a6c6e687415dfd335" address="unix:///run/containerd/s/e9fc2660cef22404fd4e79e2a757935a865b9bcd7548e6fd6c119650dd234524" protocol=ttrpc version=3 Mar 6 03:04:00.532646 systemd[1]: Started cri-containerd-8102d4b65fe335a29602dc2258e8188affc876824224228a6c6e687415dfd335.scope - libcontainer container 8102d4b65fe335a29602dc2258e8188affc876824224228a6c6e687415dfd335. Mar 6 03:04:00.606538 containerd[1983]: time="2026-03-06T03:04:00.606497839Z" level=info msg="StartContainer for \"8102d4b65fe335a29602dc2258e8188affc876824224228a6c6e687415dfd335\" returns successfully" Mar 6 03:04:00.646579 containerd[1983]: time="2026-03-06T03:04:00.646529496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbfb65546-qf94t,Uid:754aef33-4132-4e75-8c11-8ccce15a7459,Namespace:calico-system,Attempt:0,}" Mar 6 03:04:00.706510 systemd-networkd[1838]: calib0f52df67d1: Gained IPv6LL Mar 6 03:04:00.842835 systemd-networkd[1838]: cali1f3b9bd8277: Link UP Mar 6 03:04:00.845846 systemd-networkd[1838]: cali1f3b9bd8277: Gained carrier Mar 6 03:04:00.868807 containerd[1983]: 2026-03-06 03:04:00.712 [INFO][5354] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0 calico-apiserver-7fbfb65546- calico-system 754aef33-4132-4e75-8c11-8ccce15a7459 856 0 2026-03-06 03:03:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fbfb65546 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-81 calico-apiserver-7fbfb65546-qf94t eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1f3b9bd8277 [] [] }} ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-qf94t" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-" Mar 6 03:04:00.868807 containerd[1983]: 2026-03-06 03:04:00.712 [INFO][5354] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-qf94t" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" Mar 6 03:04:00.868807 containerd[1983]: 2026-03-06 03:04:00.779 [INFO][5369] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" HandleID="k8s-pod-network.a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Workload="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" Mar 6 03:04:00.874627 containerd[1983]: 2026-03-06 03:04:00.794 [INFO][5369] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" HandleID="k8s-pod-network.a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Workload="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102010), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-81", "pod":"calico-apiserver-7fbfb65546-qf94t", "timestamp":"2026-03-06 03:04:00.7792727 +0000 UTC"}, Hostname:"ip-172-31-18-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001926e0)} Mar 6 03:04:00.874627 containerd[1983]: 2026-03-06 03:04:00.794 [INFO][5369] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:04:00.874627 containerd[1983]: 2026-03-06 03:04:00.794 [INFO][5369] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 03:04:00.874627 containerd[1983]: 2026-03-06 03:04:00.794 [INFO][5369] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-81' Mar 6 03:04:00.874627 containerd[1983]: 2026-03-06 03:04:00.797 [INFO][5369] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" host="ip-172-31-18-81" Mar 6 03:04:00.874627 containerd[1983]: 2026-03-06 03:04:00.809 [INFO][5369] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-81" Mar 6 03:04:00.874627 containerd[1983]: 2026-03-06 03:04:00.814 [INFO][5369] ipam/ipam.go 526: Trying affinity for 192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:00.874627 containerd[1983]: 2026-03-06 03:04:00.817 [INFO][5369] ipam/ipam.go 160: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:00.874627 containerd[1983]: 2026-03-06 03:04:00.819 [INFO][5369] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:00.875557 containerd[1983]: 2026-03-06 03:04:00.819 [INFO][5369] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" host="ip-172-31-18-81" Mar 6 03:04:00.875557 containerd[1983]: 2026-03-06 03:04:00.821 [INFO][5369] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e Mar 6 
03:04:00.875557 containerd[1983]: 2026-03-06 03:04:00.826 [INFO][5369] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" host="ip-172-31-18-81" Mar 6 03:04:00.875557 containerd[1983]: 2026-03-06 03:04:00.835 [INFO][5369] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.49.5/26] block=192.168.49.0/26 handle="k8s-pod-network.a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" host="ip-172-31-18-81" Mar 6 03:04:00.875557 containerd[1983]: 2026-03-06 03:04:00.836 [INFO][5369] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.49.5/26] handle="k8s-pod-network.a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" host="ip-172-31-18-81" Mar 6 03:04:00.875557 containerd[1983]: 2026-03-06 03:04:00.836 [INFO][5369] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:04:00.875557 containerd[1983]: 2026-03-06 03:04:00.836 [INFO][5369] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.49.5/26] IPv6=[] ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" HandleID="k8s-pod-network.a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Workload="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" Mar 6 03:04:00.875834 containerd[1983]: 2026-03-06 03:04:00.838 [INFO][5354] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-qf94t" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0", GenerateName:"calico-apiserver-7fbfb65546-", Namespace:"calico-system", SelfLink:"", 
UID:"754aef33-4132-4e75-8c11-8ccce15a7459", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fbfb65546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"", Pod:"calico-apiserver-7fbfb65546-qf94t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f3b9bd8277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:04:00.877123 containerd[1983]: 2026-03-06 03:04:00.839 [INFO][5354] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.5/32] ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-qf94t" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" Mar 6 03:04:00.877123 containerd[1983]: 2026-03-06 03:04:00.839 [INFO][5354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f3b9bd8277 ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-qf94t" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" Mar 6 03:04:00.877123 containerd[1983]: 
2026-03-06 03:04:00.845 [INFO][5354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-qf94t" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" Mar 6 03:04:00.877271 containerd[1983]: 2026-03-06 03:04:00.846 [INFO][5354] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-qf94t" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0", GenerateName:"calico-apiserver-7fbfb65546-", Namespace:"calico-system", SelfLink:"", UID:"754aef33-4132-4e75-8c11-8ccce15a7459", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fbfb65546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e", Pod:"calico-apiserver-7fbfb65546-qf94t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.5/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f3b9bd8277", MAC:"e6:3f:a1:94:6f:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:04:00.877379 containerd[1983]: 2026-03-06 03:04:00.861 [INFO][5354] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" Namespace="calico-system" Pod="calico-apiserver-7fbfb65546-qf94t" WorkloadEndpoint="ip--172--31--18--81-k8s-calico--apiserver--7fbfb65546--qf94t-eth0" Mar 6 03:04:00.944045 containerd[1983]: time="2026-03-06T03:04:00.943939206Z" level=info msg="connecting to shim a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e" address="unix:///run/containerd/s/470306e146ac2a8f12d59d88d132ffe75a35a9a7e1324ecec5c1b6edd875615d" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:04:00.984631 systemd[1]: Started cri-containerd-a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e.scope - libcontainer container a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e. 
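Both calico-apiserver pods above went through the same IPAM round trip, each ending in a "Calico CNI IPAM assigned addresses" record. A hedged sketch for extracting ContainerID-to-IP pairs from records like these (the regex and helper are illustrative, not a Calico tool; the sample lines are abridged copies of the records above with shortened container IDs):

```python
import re

# Matches the "Calico CNI IPAM assigned addresses" records emitted by ipam_plugin.go.
ASSIGNED = re.compile(
    r'IPAM assigned addresses IPv4=\[([0-9./]+)\].*ContainerID="([0-9a-f]+)"'
)

def assigned_ips(lines):
    """Map container ID -> assigned IPv4 CIDR from Calico CNI log lines."""
    out = {}
    for line in lines:
        m = ASSIGNED.search(line)
        if m:
            out[m.group(2)] = m.group(1)
    return out

# Abridged sample records modeled on the log above (container IDs shortened).
sample = [
    '[INFO][5184] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses '
    'IPv4=[192.168.49.4/26] IPv6=[] ContainerID="14421cb07c5d"',
    '[INFO][5369] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses '
    'IPv4=[192.168.49.5/26] IPv6=[] ContainerID="a22a53b86b9d"',
]
print(assigned_ips(sample))
# {'14421cb07c5d': '192.168.49.4/26', 'a22a53b86b9d': '192.168.49.5/26'}
```

The same pattern applies to the goldmane pod later in the log, which receives 192.168.49.6/26 from the same block.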
Mar 6 03:04:01.024572 systemd-networkd[1838]: calic66315d401d: Gained IPv6LL Mar 6 03:04:01.081715 containerd[1983]: time="2026-03-06T03:04:01.081533213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbfb65546-qf94t,Uid:754aef33-4132-4e75-8c11-8ccce15a7459,Namespace:calico-system,Attempt:0,} returns sandbox id \"a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e\"" Mar 6 03:04:01.662020 containerd[1983]: time="2026-03-06T03:04:01.661792720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-zcmkp,Uid:031c3aba-3f87-40fc-a1e7-11afbad831cd,Namespace:calico-system,Attempt:0,}" Mar 6 03:04:01.664873 systemd-networkd[1838]: cali21cd075e471: Gained IPv6LL Mar 6 03:04:01.670156 containerd[1983]: time="2026-03-06T03:04:01.669349902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-5nmrp,Uid:e854168f-88e6-4370-8105-c1eae5b8cb91,Namespace:kube-system,Attempt:0,}" Mar 6 03:04:01.987512 systemd-networkd[1838]: cali1f3b9bd8277: Gained IPv6LL Mar 6 03:04:02.440937 systemd-networkd[1838]: calic6b887b10a7: Link UP Mar 6 03:04:02.448511 systemd-networkd[1838]: calic6b887b10a7: Gained carrier Mar 6 03:04:02.485509 containerd[1983]: 2026-03-06 03:04:02.027 [INFO][5453] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0 goldmane-9f7667bb8- calico-system 031c3aba-3f87-40fc-a1e7-11afbad831cd 859 0 2026-03-06 03:03:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-81 goldmane-9f7667bb8-zcmkp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic6b887b10a7 [] [] }} ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-zcmkp" WorkloadEndpoint="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-" Mar 6 03:04:02.485509 containerd[1983]: 2026-03-06 03:04:02.027 [INFO][5453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Namespace="calico-system" Pod="goldmane-9f7667bb8-zcmkp" WorkloadEndpoint="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" Mar 6 03:04:02.485509 containerd[1983]: 2026-03-06 03:04:02.191 [INFO][5476] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" HandleID="k8s-pod-network.147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Workload="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" Mar 6 03:04:02.487075 containerd[1983]: 2026-03-06 03:04:02.255 [INFO][5476] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" HandleID="k8s-pod-network.147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Workload="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e5cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-81", "pod":"goldmane-9f7667bb8-zcmkp", "timestamp":"2026-03-06 03:04:02.191678323 +0000 UTC"}, Hostname:"ip-172-31-18-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000304c60)} Mar 6 03:04:02.487075 containerd[1983]: 2026-03-06 03:04:02.255 [INFO][5476] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:04:02.487075 containerd[1983]: 2026-03-06 03:04:02.255 [INFO][5476] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:04:02.487075 containerd[1983]: 2026-03-06 03:04:02.255 [INFO][5476] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-81' Mar 6 03:04:02.487075 containerd[1983]: 2026-03-06 03:04:02.268 [INFO][5476] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" host="ip-172-31-18-81" Mar 6 03:04:02.487075 containerd[1983]: 2026-03-06 03:04:02.294 [INFO][5476] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-81" Mar 6 03:04:02.487075 containerd[1983]: 2026-03-06 03:04:02.320 [INFO][5476] ipam/ipam.go 526: Trying affinity for 192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:02.487075 containerd[1983]: 2026-03-06 03:04:02.323 [INFO][5476] ipam/ipam.go 160: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:02.487075 containerd[1983]: 2026-03-06 03:04:02.328 [INFO][5476] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:02.487800 containerd[1983]: 2026-03-06 03:04:02.328 [INFO][5476] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" host="ip-172-31-18-81" Mar 6 03:04:02.487800 containerd[1983]: 2026-03-06 03:04:02.334 [INFO][5476] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d Mar 6 03:04:02.487800 containerd[1983]: 2026-03-06 03:04:02.345 [INFO][5476] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" host="ip-172-31-18-81" Mar 6 03:04:02.487800 containerd[1983]: 2026-03-06 03:04:02.360 [INFO][5476] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.49.6/26] block=192.168.49.0/26 
handle="k8s-pod-network.147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" host="ip-172-31-18-81" Mar 6 03:04:02.487800 containerd[1983]: 2026-03-06 03:04:02.360 [INFO][5476] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.49.6/26] handle="k8s-pod-network.147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" host="ip-172-31-18-81" Mar 6 03:04:02.487800 containerd[1983]: 2026-03-06 03:04:02.362 [INFO][5476] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:04:02.487800 containerd[1983]: 2026-03-06 03:04:02.362 [INFO][5476] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.49.6/26] IPv6=[] ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" HandleID="k8s-pod-network.147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Workload="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" Mar 6 03:04:02.488372 containerd[1983]: 2026-03-06 03:04:02.371 [INFO][5453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Namespace="calico-system" Pod="goldmane-9f7667bb8-zcmkp" WorkloadEndpoint="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"031c3aba-3f87-40fc-a1e7-11afbad831cd", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"", Pod:"goldmane-9f7667bb8-zcmkp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6b887b10a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:04:02.488372 containerd[1983]: 2026-03-06 03:04:02.372 [INFO][5453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.6/32] ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Namespace="calico-system" Pod="goldmane-9f7667bb8-zcmkp" WorkloadEndpoint="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" Mar 6 03:04:02.488781 containerd[1983]: 2026-03-06 03:04:02.372 [INFO][5453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6b887b10a7 ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Namespace="calico-system" Pod="goldmane-9f7667bb8-zcmkp" WorkloadEndpoint="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" Mar 6 03:04:02.488781 containerd[1983]: 2026-03-06 03:04:02.448 [INFO][5453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Namespace="calico-system" Pod="goldmane-9f7667bb8-zcmkp" WorkloadEndpoint="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" Mar 6 03:04:02.488865 containerd[1983]: 2026-03-06 03:04:02.448 [INFO][5453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-zcmkp" WorkloadEndpoint="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"031c3aba-3f87-40fc-a1e7-11afbad831cd", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d", Pod:"goldmane-9f7667bb8-zcmkp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6b887b10a7", MAC:"b2:cb:38:40:da:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:04:02.488967 containerd[1983]: 2026-03-06 03:04:02.481 [INFO][5453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" Namespace="calico-system" Pod="goldmane-9f7667bb8-zcmkp" WorkloadEndpoint="ip--172--31--18--81-k8s-goldmane--9f7667bb8--zcmkp-eth0" Mar 6 03:04:02.525805 systemd-networkd[1838]: cali626b87522e5: 
Link UP Mar 6 03:04:02.527049 systemd-networkd[1838]: cali626b87522e5: Gained carrier Mar 6 03:04:02.572272 containerd[1983]: 2026-03-06 03:04:02.111 [INFO][5454] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0 coredns-7d764666f9- kube-system e854168f-88e6-4370-8105-c1eae5b8cb91 852 0 2026-03-06 03:03:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-81 coredns-7d764666f9-5nmrp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali626b87522e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Namespace="kube-system" Pod="coredns-7d764666f9-5nmrp" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-" Mar 6 03:04:02.572272 containerd[1983]: 2026-03-06 03:04:02.114 [INFO][5454] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Namespace="kube-system" Pod="coredns-7d764666f9-5nmrp" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" Mar 6 03:04:02.572272 containerd[1983]: 2026-03-06 03:04:02.318 [INFO][5487] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" HandleID="k8s-pod-network.b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Workload="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" Mar 6 03:04:02.572722 containerd[1983]: 2026-03-06 03:04:02.333 [INFO][5487] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" 
HandleID="k8s-pod-network.b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Workload="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cf790), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-81", "pod":"coredns-7d764666f9-5nmrp", "timestamp":"2026-03-06 03:04:02.318138815 +0000 UTC"}, Hostname:"ip-172-31-18-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003d2580)} Mar 6 03:04:02.572722 containerd[1983]: 2026-03-06 03:04:02.333 [INFO][5487] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:04:02.572722 containerd[1983]: 2026-03-06 03:04:02.361 [INFO][5487] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 03:04:02.572722 containerd[1983]: 2026-03-06 03:04:02.361 [INFO][5487] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-81' Mar 6 03:04:02.572722 containerd[1983]: 2026-03-06 03:04:02.368 [INFO][5487] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" host="ip-172-31-18-81" Mar 6 03:04:02.572722 containerd[1983]: 2026-03-06 03:04:02.451 [INFO][5487] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-81" Mar 6 03:04:02.572722 containerd[1983]: 2026-03-06 03:04:02.466 [INFO][5487] ipam/ipam.go 526: Trying affinity for 192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:02.572722 containerd[1983]: 2026-03-06 03:04:02.475 [INFO][5487] ipam/ipam.go 160: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:02.572722 containerd[1983]: 2026-03-06 03:04:02.484 [INFO][5487] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 
host="ip-172-31-18-81" Mar 6 03:04:02.573175 containerd[1983]: 2026-03-06 03:04:02.484 [INFO][5487] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" host="ip-172-31-18-81" Mar 6 03:04:02.573175 containerd[1983]: 2026-03-06 03:04:02.490 [INFO][5487] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f Mar 6 03:04:02.573175 containerd[1983]: 2026-03-06 03:04:02.501 [INFO][5487] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" host="ip-172-31-18-81" Mar 6 03:04:02.573175 containerd[1983]: 2026-03-06 03:04:02.516 [INFO][5487] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.49.7/26] block=192.168.49.0/26 handle="k8s-pod-network.b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" host="ip-172-31-18-81" Mar 6 03:04:02.573175 containerd[1983]: 2026-03-06 03:04:02.516 [INFO][5487] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.49.7/26] handle="k8s-pod-network.b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" host="ip-172-31-18-81" Mar 6 03:04:02.573175 containerd[1983]: 2026-03-06 03:04:02.516 [INFO][5487] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 03:04:02.573175 containerd[1983]: 2026-03-06 03:04:02.516 [INFO][5487] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.49.7/26] IPv6=[] ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" HandleID="k8s-pod-network.b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Workload="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" Mar 6 03:04:02.576068 containerd[1983]: 2026-03-06 03:04:02.521 [INFO][5454] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Namespace="kube-system" Pod="coredns-7d764666f9-5nmrp" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e854168f-88e6-4370-8105-c1eae5b8cb91", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"", Pod:"coredns-7d764666f9-5nmrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali626b87522e5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:04:02.576068 containerd[1983]: 2026-03-06 03:04:02.522 [INFO][5454] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.7/32] ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Namespace="kube-system" Pod="coredns-7d764666f9-5nmrp" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" Mar 6 03:04:02.576068 containerd[1983]: 2026-03-06 03:04:02.522 [INFO][5454] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali626b87522e5 ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Namespace="kube-system" Pod="coredns-7d764666f9-5nmrp" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" Mar 6 03:04:02.576068 containerd[1983]: 2026-03-06 03:04:02.527 [INFO][5454] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Namespace="kube-system" Pod="coredns-7d764666f9-5nmrp" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" Mar 6 03:04:02.576068 containerd[1983]: 2026-03-06 03:04:02.528 [INFO][5454] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Namespace="kube-system" Pod="coredns-7d764666f9-5nmrp" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e854168f-88e6-4370-8105-c1eae5b8cb91", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f", Pod:"coredns-7d764666f9-5nmrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali626b87522e5", MAC:"d6:c9:b5:fe:74:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:04:02.576068 containerd[1983]: 2026-03-06 03:04:02.563 [INFO][5454] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" Namespace="kube-system" Pod="coredns-7d764666f9-5nmrp" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--5nmrp-eth0" Mar 6 03:04:02.618453 containerd[1983]: time="2026-03-06T03:04:02.616597648Z" level=info msg="connecting to shim 147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d" address="unix:///run/containerd/s/3c93538e035b76db77d56dece5b9537359bebb5431cd2c3e4cca1b51ca53e51f" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:04:02.639954 containerd[1983]: time="2026-03-06T03:04:02.639786146Z" level=info msg="connecting to shim b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f" address="unix:///run/containerd/s/0cb27dcee3e5b5f9c638afd6d585784fdeaa019cf8a615576a7ebba0eeac8dbc" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:04:02.704964 systemd[1]: Started cri-containerd-147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d.scope - libcontainer container 147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d. Mar 6 03:04:02.726695 systemd[1]: Started cri-containerd-b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f.scope - libcontainer container b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f. 
Mar 6 03:04:02.814890 systemd[1]: Started sshd@8-172.31.18.81:22-68.220.241.50:58912.service - OpenSSH per-connection server daemon (68.220.241.50:58912). Mar 6 03:04:02.819265 containerd[1983]: time="2026-03-06T03:04:02.819192545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-5nmrp,Uid:e854168f-88e6-4370-8105-c1eae5b8cb91,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f\"" Mar 6 03:04:02.835350 containerd[1983]: time="2026-03-06T03:04:02.834964683Z" level=info msg="CreateContainer within sandbox \"b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 03:04:02.977858 containerd[1983]: time="2026-03-06T03:04:02.977815286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-zcmkp,Uid:031c3aba-3f87-40fc-a1e7-11afbad831cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d\"" Mar 6 03:04:03.040827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252170084.mount: Deactivated successfully. Mar 6 03:04:03.051468 containerd[1983]: time="2026-03-06T03:04:03.050664607Z" level=info msg="Container bbb23b32f8ddd2e3a4a67d504b48d7639e8fb1fd58a2894ee715276915130084: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:04:03.058547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594324731.mount: Deactivated successfully. 
Mar 6 03:04:03.069241 containerd[1983]: time="2026-03-06T03:04:03.069208768Z" level=info msg="CreateContainer within sandbox \"b2df5c784885e60c7ff440cd8caa47f79d583b727447210719661a5be213c60f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbb23b32f8ddd2e3a4a67d504b48d7639e8fb1fd58a2894ee715276915130084\"" Mar 6 03:04:03.069891 containerd[1983]: time="2026-03-06T03:04:03.069862839Z" level=info msg="StartContainer for \"bbb23b32f8ddd2e3a4a67d504b48d7639e8fb1fd58a2894ee715276915130084\"" Mar 6 03:04:03.072340 containerd[1983]: time="2026-03-06T03:04:03.071347142Z" level=info msg="connecting to shim bbb23b32f8ddd2e3a4a67d504b48d7639e8fb1fd58a2894ee715276915130084" address="unix:///run/containerd/s/0cb27dcee3e5b5f9c638afd6d585784fdeaa019cf8a615576a7ebba0eeac8dbc" protocol=ttrpc version=3 Mar 6 03:04:03.102724 systemd[1]: Started cri-containerd-bbb23b32f8ddd2e3a4a67d504b48d7639e8fb1fd58a2894ee715276915130084.scope - libcontainer container bbb23b32f8ddd2e3a4a67d504b48d7639e8fb1fd58a2894ee715276915130084. Mar 6 03:04:03.164816 containerd[1983]: time="2026-03-06T03:04:03.164772547Z" level=info msg="StartContainer for \"bbb23b32f8ddd2e3a4a67d504b48d7639e8fb1fd58a2894ee715276915130084\" returns successfully" Mar 6 03:04:03.458707 sshd[5605]: Accepted publickey for core from 68.220.241.50 port 58912 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:04:03.487536 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:04:03.502059 systemd-logind[1952]: New session 9 of user core. Mar 6 03:04:03.508615 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 6 03:04:03.776647 systemd-networkd[1838]: cali626b87522e5: Gained IPv6LL Mar 6 03:04:03.829015 containerd[1983]: time="2026-03-06T03:04:03.828949467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d8hdh,Uid:a66f999f-3bca-46b9-bcee-a4b3df253d8f,Namespace:kube-system,Attempt:0,}" Mar 6 03:04:03.968591 systemd-networkd[1838]: calic6b887b10a7: Gained IPv6LL Mar 6 03:04:04.241171 systemd-networkd[1838]: calid507c902f5b: Link UP Mar 6 03:04:04.243037 systemd-networkd[1838]: calid507c902f5b: Gained carrier Mar 6 03:04:04.268578 kubelet[3329]: I0306 03:04:04.268481 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-5nmrp" podStartSLOduration=61.268410507 podStartE2EDuration="1m1.268410507s" podCreationTimestamp="2026-03-06 03:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:04:04.216905601 +0000 UTC m=+66.728876690" watchObservedRunningTime="2026-03-06 03:04:04.268410507 +0000 UTC m=+66.780381594" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.005 [INFO][5659] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0 coredns-7d764666f9- kube-system a66f999f-3bca-46b9-bcee-a4b3df253d8f 862 0 2026-03-06 03:03:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-81 coredns-7d764666f9-d8hdh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid507c902f5b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Namespace="kube-system" 
Pod="coredns-7d764666f9-d8hdh" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.008 [INFO][5659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Namespace="kube-system" Pod="coredns-7d764666f9-d8hdh" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.134 [INFO][5677] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" HandleID="k8s-pod-network.68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Workload="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.145 [INFO][5677] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" HandleID="k8s-pod-network.68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Workload="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004555d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-81", "pod":"coredns-7d764666f9-d8hdh", "timestamp":"2026-03-06 03:04:04.134589095 +0000 UTC"}, Hostname:"ip-172-31-18-81", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002286e0)} Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.146 [INFO][5677] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.146 [INFO][5677] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.147 [INFO][5677] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-81' Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.153 [INFO][5677] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" host="ip-172-31-18-81" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.174 [INFO][5677] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-81" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.185 [INFO][5677] ipam/ipam.go 526: Trying affinity for 192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.189 [INFO][5677] ipam/ipam.go 160: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.195 [INFO][5677] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-18-81" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.195 [INFO][5677] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" host="ip-172-31-18-81" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.199 [INFO][5677] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20 Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.209 [INFO][5677] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" host="ip-172-31-18-81" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.228 [INFO][5677] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.49.8/26] block=192.168.49.0/26 
handle="k8s-pod-network.68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" host="ip-172-31-18-81" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.229 [INFO][5677] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.49.8/26] handle="k8s-pod-network.68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" host="ip-172-31-18-81" Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.229 [INFO][5677] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:04:04.287797 containerd[1983]: 2026-03-06 03:04:04.229 [INFO][5677] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.49.8/26] IPv6=[] ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" HandleID="k8s-pod-network.68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Workload="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" Mar 6 03:04:04.290900 containerd[1983]: 2026-03-06 03:04:04.235 [INFO][5659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Namespace="kube-system" Pod="coredns-7d764666f9-d8hdh" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a66f999f-3bca-46b9-bcee-a4b3df253d8f", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"", Pod:"coredns-7d764666f9-d8hdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid507c902f5b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:04:04.290900 containerd[1983]: 2026-03-06 03:04:04.235 [INFO][5659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.8/32] ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Namespace="kube-system" Pod="coredns-7d764666f9-d8hdh" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" Mar 6 03:04:04.290900 containerd[1983]: 2026-03-06 03:04:04.235 [INFO][5659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid507c902f5b ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Namespace="kube-system" Pod="coredns-7d764666f9-d8hdh" 
WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" Mar 6 03:04:04.290900 containerd[1983]: 2026-03-06 03:04:04.243 [INFO][5659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Namespace="kube-system" Pod="coredns-7d764666f9-d8hdh" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" Mar 6 03:04:04.290900 containerd[1983]: 2026-03-06 03:04:04.245 [INFO][5659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Namespace="kube-system" Pod="coredns-7d764666f9-d8hdh" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a66f999f-3bca-46b9-bcee-a4b3df253d8f", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-81", ContainerID:"68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20", Pod:"coredns-7d764666f9-d8hdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid507c902f5b", MAC:"ca:17:dd:a0:d4:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:04:04.290900 containerd[1983]: 2026-03-06 03:04:04.273 [INFO][5659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" Namespace="kube-system" Pod="coredns-7d764666f9-d8hdh" WorkloadEndpoint="ip--172--31--18--81-k8s-coredns--7d764666f9--d8hdh-eth0" Mar 6 03:04:04.366682 containerd[1983]: time="2026-03-06T03:04:04.366636736Z" level=info msg="connecting to shim 68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20" address="unix:///run/containerd/s/e8f6483c76df5a5f5ab7afd874e5e62d9524162e264cd9a3bfbb10e62b7aaf3f" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:04:04.475647 systemd[1]: Started cri-containerd-68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20.scope - libcontainer container 68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20. 
Mar 6 03:04:04.732139 containerd[1983]: time="2026-03-06T03:04:04.732033378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d8hdh,Uid:a66f999f-3bca-46b9-bcee-a4b3df253d8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20\"" Mar 6 03:04:04.796529 containerd[1983]: time="2026-03-06T03:04:04.796486440Z" level=info msg="CreateContainer within sandbox \"68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 03:04:04.849458 containerd[1983]: time="2026-03-06T03:04:04.847555522Z" level=info msg="Container 7e4c9b4720b1913a11f93d686151c33f0b29f15d55b5938cff6925a2741c7f1a: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:04:04.856277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417393998.mount: Deactivated successfully. Mar 6 03:04:04.866619 containerd[1983]: time="2026-03-06T03:04:04.866579739Z" level=info msg="CreateContainer within sandbox \"68a033aa52b5f4b4940a761262395f20d6aacfa8d1faf2af9049ed54b079df20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e4c9b4720b1913a11f93d686151c33f0b29f15d55b5938cff6925a2741c7f1a\"" Mar 6 03:04:04.868913 containerd[1983]: time="2026-03-06T03:04:04.868860489Z" level=info msg="StartContainer for \"7e4c9b4720b1913a11f93d686151c33f0b29f15d55b5938cff6925a2741c7f1a\"" Mar 6 03:04:04.874331 containerd[1983]: time="2026-03-06T03:04:04.874282313Z" level=info msg="connecting to shim 7e4c9b4720b1913a11f93d686151c33f0b29f15d55b5938cff6925a2741c7f1a" address="unix:///run/containerd/s/e8f6483c76df5a5f5ab7afd874e5e62d9524162e264cd9a3bfbb10e62b7aaf3f" protocol=ttrpc version=3 Mar 6 03:04:04.960939 systemd[1]: Started cri-containerd-7e4c9b4720b1913a11f93d686151c33f0b29f15d55b5938cff6925a2741c7f1a.scope - libcontainer container 7e4c9b4720b1913a11f93d686151c33f0b29f15d55b5938cff6925a2741c7f1a. 
Mar 6 03:04:05.043699 containerd[1983]: time="2026-03-06T03:04:05.043575080Z" level=info msg="StartContainer for \"7e4c9b4720b1913a11f93d686151c33f0b29f15d55b5938cff6925a2741c7f1a\" returns successfully" Mar 6 03:04:05.110450 sshd[5650]: Connection closed by 68.220.241.50 port 58912 Mar 6 03:04:05.110136 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Mar 6 03:04:05.118203 systemd[1]: sshd@8-172.31.18.81:22-68.220.241.50:58912.service: Deactivated successfully. Mar 6 03:04:05.121609 systemd[1]: session-9.scope: Deactivated successfully. Mar 6 03:04:05.123184 systemd-logind[1952]: Session 9 logged out. Waiting for processes to exit. Mar 6 03:04:05.126340 systemd-logind[1952]: Removed session 9. Mar 6 03:04:05.184460 kubelet[3329]: I0306 03:04:05.184029 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-d8hdh" podStartSLOduration=62.184009611 podStartE2EDuration="1m2.184009611s" podCreationTimestamp="2026-03-06 03:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:04:05.183150745 +0000 UTC m=+67.695121845" watchObservedRunningTime="2026-03-06 03:04:05.184009611 +0000 UTC m=+67.695980698" Mar 6 03:04:05.376710 systemd-networkd[1838]: calid507c902f5b: Gained IPv6LL Mar 6 03:04:06.379299 containerd[1983]: time="2026-03-06T03:04:06.373448912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 6 03:04:06.387799 containerd[1983]: time="2026-03-06T03:04:06.387736904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:06.408804 containerd[1983]: time="2026-03-06T03:04:06.408740715Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:06.415118 containerd[1983]: time="2026-03-06T03:04:06.415068794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.947193567s" Mar 6 03:04:06.415118 containerd[1983]: time="2026-03-06T03:04:06.415117098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 6 03:04:06.416069 containerd[1983]: time="2026-03-06T03:04:06.416028765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:06.417980 containerd[1983]: time="2026-03-06T03:04:06.417958102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 03:04:06.501452 containerd[1983]: time="2026-03-06T03:04:06.498002348Z" level=info msg="CreateContainer within sandbox \"ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 6 03:04:06.512440 containerd[1983]: time="2026-03-06T03:04:06.510934477Z" level=info msg="Container c73dba45c7006d21927cc2b5abe2571c343842fbc9c0a41af61b6d0a1685a48b: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:04:06.520965 containerd[1983]: time="2026-03-06T03:04:06.520916183Z" level=info msg="CreateContainer within sandbox \"ac7738190806e0da70c1d7b79028f870282e2b9e7f85b576a5ff116ccda1234e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"c73dba45c7006d21927cc2b5abe2571c343842fbc9c0a41af61b6d0a1685a48b\"" Mar 6 03:04:06.523360 containerd[1983]: time="2026-03-06T03:04:06.521913356Z" level=info msg="StartContainer for \"c73dba45c7006d21927cc2b5abe2571c343842fbc9c0a41af61b6d0a1685a48b\"" Mar 6 03:04:06.523712 containerd[1983]: time="2026-03-06T03:04:06.523627994Z" level=info msg="connecting to shim c73dba45c7006d21927cc2b5abe2571c343842fbc9c0a41af61b6d0a1685a48b" address="unix:///run/containerd/s/de37f765f4f544a9f83f0683e82420377c33b112014aa255d330619b229c0834" protocol=ttrpc version=3 Mar 6 03:04:06.554691 systemd[1]: Started cri-containerd-c73dba45c7006d21927cc2b5abe2571c343842fbc9c0a41af61b6d0a1685a48b.scope - libcontainer container c73dba45c7006d21927cc2b5abe2571c343842fbc9c0a41af61b6d0a1685a48b. Mar 6 03:04:06.623290 containerd[1983]: time="2026-03-06T03:04:06.623207303Z" level=info msg="StartContainer for \"c73dba45c7006d21927cc2b5abe2571c343842fbc9c0a41af61b6d0a1685a48b\" returns successfully" Mar 6 03:04:07.186532 kubelet[3329]: I0306 03:04:07.185841 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d48547fb8-6cbb5" podStartSLOduration=39.024050481 podStartE2EDuration="45.185819548s" podCreationTimestamp="2026-03-06 03:03:22 +0000 UTC" firstStartedPulling="2026-03-06 03:04:00.255928025 +0000 UTC m=+62.767899100" lastFinishedPulling="2026-03-06 03:04:06.417697088 +0000 UTC m=+68.929668167" observedRunningTime="2026-03-06 03:04:07.184053605 +0000 UTC m=+69.696024693" watchObservedRunningTime="2026-03-06 03:04:07.185819548 +0000 UTC m=+69.697790638" Mar 6 03:04:07.837891 ntpd[2217]: Listen normally on 9 calib0f52df67d1 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 6 03:04:07.837945 ntpd[2217]: Listen normally on 10 calic66315d401d [fe80::ecee:eeff:feee:eeee%9]:123 Mar 6 03:04:07.839250 ntpd[2217]: 6 Mar 03:04:07 ntpd[2217]: Listen normally on 9 calib0f52df67d1 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 6 03:04:07.839250 ntpd[2217]: 6 Mar 
03:04:07 ntpd[2217]: Listen normally on 10 calic66315d401d [fe80::ecee:eeff:feee:eeee%9]:123 Mar 6 03:04:07.839250 ntpd[2217]: 6 Mar 03:04:07 ntpd[2217]: Listen normally on 11 cali21cd075e471 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 6 03:04:07.839250 ntpd[2217]: 6 Mar 03:04:07 ntpd[2217]: Listen normally on 12 cali1f3b9bd8277 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 6 03:04:07.839250 ntpd[2217]: 6 Mar 03:04:07 ntpd[2217]: Listen normally on 13 calic6b887b10a7 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 6 03:04:07.839250 ntpd[2217]: 6 Mar 03:04:07 ntpd[2217]: Listen normally on 14 cali626b87522e5 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 6 03:04:07.839250 ntpd[2217]: 6 Mar 03:04:07 ntpd[2217]: Listen normally on 15 calid507c902f5b [fe80::ecee:eeff:feee:eeee%14]:123 Mar 6 03:04:07.837971 ntpd[2217]: Listen normally on 11 cali21cd075e471 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 6 03:04:07.837997 ntpd[2217]: Listen normally on 12 cali1f3b9bd8277 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 6 03:04:07.838023 ntpd[2217]: Listen normally on 13 calic6b887b10a7 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 6 03:04:07.839091 ntpd[2217]: Listen normally on 14 cali626b87522e5 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 6 03:04:07.839145 ntpd[2217]: Listen normally on 15 calid507c902f5b [fe80::ecee:eeff:feee:eeee%14]:123 Mar 6 03:04:10.110459 containerd[1983]: time="2026-03-06T03:04:10.110396038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:10.130461 containerd[1983]: time="2026-03-06T03:04:10.130216992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 6 03:04:10.135656 containerd[1983]: time="2026-03-06T03:04:10.135366961Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:10.136320 
containerd[1983]: time="2026-03-06T03:04:10.136287903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:10.137176 containerd[1983]: time="2026-03-06T03:04:10.137142277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.718881446s" Mar 6 03:04:10.137257 containerd[1983]: time="2026-03-06T03:04:10.137181750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 03:04:10.138633 containerd[1983]: time="2026-03-06T03:04:10.138613006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 6 03:04:10.148151 containerd[1983]: time="2026-03-06T03:04:10.146743062Z" level=info msg="CreateContainer within sandbox \"14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 03:04:10.157034 containerd[1983]: time="2026-03-06T03:04:10.156994862Z" level=info msg="Container 693e3520c8e471a6cd05a9ba22fc794391f8ad7838922e0d193f061ddf40dda0: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:04:10.177634 containerd[1983]: time="2026-03-06T03:04:10.177589585Z" level=info msg="CreateContainer within sandbox \"14421cb07c5db05277b00e76b0391f0e4d8a078dbb5948d74c517bfd5ef7dfb2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"693e3520c8e471a6cd05a9ba22fc794391f8ad7838922e0d193f061ddf40dda0\"" Mar 6 03:04:10.180522 containerd[1983]: 
time="2026-03-06T03:04:10.180266948Z" level=info msg="StartContainer for \"693e3520c8e471a6cd05a9ba22fc794391f8ad7838922e0d193f061ddf40dda0\"" Mar 6 03:04:10.182637 containerd[1983]: time="2026-03-06T03:04:10.182604125Z" level=info msg="connecting to shim 693e3520c8e471a6cd05a9ba22fc794391f8ad7838922e0d193f061ddf40dda0" address="unix:///run/containerd/s/e63ddb8ef1923c693f0f03e34be9888a11e57cbfc2b273f05f734029e7aeb1db" protocol=ttrpc version=3 Mar 6 03:04:10.206762 systemd[1]: Started sshd@9-172.31.18.81:22-68.220.241.50:58916.service - OpenSSH per-connection server daemon (68.220.241.50:58916). Mar 6 03:04:10.237679 systemd[1]: Started cri-containerd-693e3520c8e471a6cd05a9ba22fc794391f8ad7838922e0d193f061ddf40dda0.scope - libcontainer container 693e3520c8e471a6cd05a9ba22fc794391f8ad7838922e0d193f061ddf40dda0. Mar 6 03:04:10.347913 containerd[1983]: time="2026-03-06T03:04:10.347876525Z" level=info msg="StartContainer for \"693e3520c8e471a6cd05a9ba22fc794391f8ad7838922e0d193f061ddf40dda0\" returns successfully" Mar 6 03:04:10.764927 sshd[5903]: Accepted publickey for core from 68.220.241.50 port 58916 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:04:10.768064 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:04:10.775754 systemd-logind[1952]: New session 10 of user core. Mar 6 03:04:10.784617 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 6 03:04:11.691314 kubelet[3329]: I0306 03:04:11.689851 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7fbfb65546-bwshp" podStartSLOduration=40.860116116 podStartE2EDuration="50.689827381s" podCreationTimestamp="2026-03-06 03:03:21 +0000 UTC" firstStartedPulling="2026-03-06 03:04:00.308684932 +0000 UTC m=+62.820655998" lastFinishedPulling="2026-03-06 03:04:10.138396198 +0000 UTC m=+72.650367263" observedRunningTime="2026-03-06 03:04:11.32314646 +0000 UTC m=+73.835117548" watchObservedRunningTime="2026-03-06 03:04:11.689827381 +0000 UTC m=+74.201798469" Mar 6 03:04:12.057450 sshd[5938]: Connection closed by 68.220.241.50 port 58916 Mar 6 03:04:12.058653 sshd-session[5903]: pam_unix(sshd:session): session closed for user core Mar 6 03:04:12.066472 systemd[1]: sshd@9-172.31.18.81:22-68.220.241.50:58916.service: Deactivated successfully. Mar 6 03:04:12.071898 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 03:04:12.077404 systemd-logind[1952]: Session 10 logged out. Waiting for processes to exit. Mar 6 03:04:12.081837 systemd-logind[1952]: Removed session 10. 
Mar 6 03:04:12.711690 containerd[1983]: time="2026-03-06T03:04:12.711639653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:12.713139 containerd[1983]: time="2026-03-06T03:04:12.713103101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 6 03:04:12.715758 containerd[1983]: time="2026-03-06T03:04:12.714703846Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:12.719735 containerd[1983]: time="2026-03-06T03:04:12.719698493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:12.720997 containerd[1983]: time="2026-03-06T03:04:12.720338063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.581543725s" Mar 6 03:04:12.720997 containerd[1983]: time="2026-03-06T03:04:12.720378567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 6 03:04:12.732215 containerd[1983]: time="2026-03-06T03:04:12.732177576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 03:04:12.828229 containerd[1983]: time="2026-03-06T03:04:12.828170362Z" level=info 
msg="CreateContainer within sandbox \"a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 6 03:04:12.838463 containerd[1983]: time="2026-03-06T03:04:12.837949719Z" level=info msg="Container ed02a2e1a8764b88f038131b612db80f4427c8e0b23c79ee127011d6f24ce675: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:04:12.850348 containerd[1983]: time="2026-03-06T03:04:12.850303280Z" level=info msg="CreateContainer within sandbox \"a9840a586f0f847e4ab0fea2422a129fa00eb685e28d7761c5ea71edea603a28\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ed02a2e1a8764b88f038131b612db80f4427c8e0b23c79ee127011d6f24ce675\"" Mar 6 03:04:12.852931 containerd[1983]: time="2026-03-06T03:04:12.851172077Z" level=info msg="StartContainer for \"ed02a2e1a8764b88f038131b612db80f4427c8e0b23c79ee127011d6f24ce675\"" Mar 6 03:04:12.853208 containerd[1983]: time="2026-03-06T03:04:12.853077362Z" level=info msg="connecting to shim ed02a2e1a8764b88f038131b612db80f4427c8e0b23c79ee127011d6f24ce675" address="unix:///run/containerd/s/e9fc2660cef22404fd4e79e2a757935a865b9bcd7548e6fd6c119650dd234524" protocol=ttrpc version=3 Mar 6 03:04:12.920703 systemd[1]: Started cri-containerd-ed02a2e1a8764b88f038131b612db80f4427c8e0b23c79ee127011d6f24ce675.scope - libcontainer container ed02a2e1a8764b88f038131b612db80f4427c8e0b23c79ee127011d6f24ce675. 
Mar 6 03:04:13.101851 containerd[1983]: time="2026-03-06T03:04:13.101804256Z" level=info msg="StartContainer for \"ed02a2e1a8764b88f038131b612db80f4427c8e0b23c79ee127011d6f24ce675\" returns successfully" Mar 6 03:04:13.103470 containerd[1983]: time="2026-03-06T03:04:13.102978610Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:13.104915 containerd[1983]: time="2026-03-06T03:04:13.104871063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 6 03:04:13.109047 containerd[1983]: time="2026-03-06T03:04:13.108967080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 376.557487ms" Mar 6 03:04:13.109244 containerd[1983]: time="2026-03-06T03:04:13.109222717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 03:04:13.111456 containerd[1983]: time="2026-03-06T03:04:13.110873587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 6 03:04:13.115242 containerd[1983]: time="2026-03-06T03:04:13.115206828Z" level=info msg="CreateContainer within sandbox \"a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 03:04:13.132503 containerd[1983]: time="2026-03-06T03:04:13.131653301Z" level=info msg="Container b66b0fbe8a187b37dfc01ee79c7b67e4eaea425243806d0c8a79e62d2473284e: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:04:13.203953 containerd[1983]: 
time="2026-03-06T03:04:13.202483529Z" level=info msg="CreateContainer within sandbox \"a22a53b86b9d1524cefb5f2ac1f1a4632c94fe1ee29fbb513b6c10a6bf5bc88e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b66b0fbe8a187b37dfc01ee79c7b67e4eaea425243806d0c8a79e62d2473284e\"" Mar 6 03:04:13.205657 containerd[1983]: time="2026-03-06T03:04:13.205122313Z" level=info msg="StartContainer for \"b66b0fbe8a187b37dfc01ee79c7b67e4eaea425243806d0c8a79e62d2473284e\"" Mar 6 03:04:13.208119 containerd[1983]: time="2026-03-06T03:04:13.208093101Z" level=info msg="connecting to shim b66b0fbe8a187b37dfc01ee79c7b67e4eaea425243806d0c8a79e62d2473284e" address="unix:///run/containerd/s/470306e146ac2a8f12d59d88d132ffe75a35a9a7e1324ecec5c1b6edd875615d" protocol=ttrpc version=3 Mar 6 03:04:13.244802 systemd[1]: Started cri-containerd-b66b0fbe8a187b37dfc01ee79c7b67e4eaea425243806d0c8a79e62d2473284e.scope - libcontainer container b66b0fbe8a187b37dfc01ee79c7b67e4eaea425243806d0c8a79e62d2473284e. 
Mar 6 03:04:13.382021 kubelet[3329]: I0306 03:04:13.381850 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-zsntj" podStartSLOduration=37.672291495 podStartE2EDuration="51.372347131s" podCreationTimestamp="2026-03-06 03:03:22 +0000 UTC" firstStartedPulling="2026-03-06 03:03:59.027010078 +0000 UTC m=+61.538981157" lastFinishedPulling="2026-03-06 03:04:12.727065715 +0000 UTC m=+75.239036793" observedRunningTime="2026-03-06 03:04:13.372265653 +0000 UTC m=+75.884236737" watchObservedRunningTime="2026-03-06 03:04:13.372347131 +0000 UTC m=+75.884318219" Mar 6 03:04:13.385780 containerd[1983]: time="2026-03-06T03:04:13.385599912Z" level=info msg="StartContainer for \"b66b0fbe8a187b37dfc01ee79c7b67e4eaea425243806d0c8a79e62d2473284e\" returns successfully" Mar 6 03:04:13.973502 kubelet[3329]: I0306 03:04:13.973459 3329 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 6 03:04:13.973663 kubelet[3329]: I0306 03:04:13.973520 3329 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 6 03:04:14.372351 kubelet[3329]: I0306 03:04:14.371836 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7fbfb65546-qf94t" podStartSLOduration=41.345872866 podStartE2EDuration="53.371819346s" podCreationTimestamp="2026-03-06 03:03:21 +0000 UTC" firstStartedPulling="2026-03-06 03:04:01.084519389 +0000 UTC m=+63.596490463" lastFinishedPulling="2026-03-06 03:04:13.110465856 +0000 UTC m=+75.622436943" observedRunningTime="2026-03-06 03:04:14.370959362 +0000 UTC m=+76.882930449" watchObservedRunningTime="2026-03-06 03:04:14.371819346 +0000 UTC m=+76.883790442" Mar 6 03:04:16.245280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95622420.mount: Deactivated 
successfully. Mar 6 03:04:17.050832 containerd[1983]: time="2026-03-06T03:04:17.050504391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:17.053615 containerd[1983]: time="2026-03-06T03:04:17.053575057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 6 03:04:17.054185 containerd[1983]: time="2026-03-06T03:04:17.054148839Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:17.057515 containerd[1983]: time="2026-03-06T03:04:17.057219337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:04:17.058307 containerd[1983]: time="2026-03-06T03:04:17.058262130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.946664751s" Mar 6 03:04:17.058477 containerd[1983]: time="2026-03-06T03:04:17.058455046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 6 03:04:17.174106 systemd[1]: Started sshd@10-172.31.18.81:22-68.220.241.50:52684.service - OpenSSH per-connection server daemon (68.220.241.50:52684). 
Mar 6 03:04:17.407597 containerd[1983]: time="2026-03-06T03:04:17.407172949Z" level=info msg="CreateContainer within sandbox \"147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 6 03:04:17.421464 containerd[1983]: time="2026-03-06T03:04:17.420585711Z" level=info msg="Container 4f90b408ab52d935a4742d6119a3da902aca4846c547ac1d9f5c5309f2114231: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:04:17.431884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687550375.mount: Deactivated successfully. Mar 6 03:04:17.443202 containerd[1983]: time="2026-03-06T03:04:17.443144590Z" level=info msg="CreateContainer within sandbox \"147855c9ccdf47f72fbe8e7c306a34dafe15603826c20a39a9ef3ff3c86bd48d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4f90b408ab52d935a4742d6119a3da902aca4846c547ac1d9f5c5309f2114231\"" Mar 6 03:04:17.453526 containerd[1983]: time="2026-03-06T03:04:17.453407474Z" level=info msg="StartContainer for \"4f90b408ab52d935a4742d6119a3da902aca4846c547ac1d9f5c5309f2114231\"" Mar 6 03:04:17.495508 containerd[1983]: time="2026-03-06T03:04:17.495455541Z" level=info msg="connecting to shim 4f90b408ab52d935a4742d6119a3da902aca4846c547ac1d9f5c5309f2114231" address="unix:///run/containerd/s/3c93538e035b76db77d56dece5b9537359bebb5431cd2c3e4cca1b51ca53e51f" protocol=ttrpc version=3 Mar 6 03:04:17.652675 systemd[1]: Started cri-containerd-4f90b408ab52d935a4742d6119a3da902aca4846c547ac1d9f5c5309f2114231.scope - libcontainer container 4f90b408ab52d935a4742d6119a3da902aca4846c547ac1d9f5c5309f2114231. Mar 6 03:04:17.799708 sshd[6069]: Accepted publickey for core from 68.220.241.50 port 52684 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:04:17.805376 sshd-session[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:04:17.832379 systemd-logind[1952]: New session 11 of user core. 
Mar 6 03:04:17.837322 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 6 03:04:18.066793 containerd[1983]: time="2026-03-06T03:04:18.066512128Z" level=info msg="StartContainer for \"4f90b408ab52d935a4742d6119a3da902aca4846c547ac1d9f5c5309f2114231\" returns successfully" Mar 6 03:04:18.898396 kubelet[3329]: I0306 03:04:18.878195 3329 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-zcmkp" podStartSLOduration=43.721690236 podStartE2EDuration="57.837617868s" podCreationTimestamp="2026-03-06 03:03:21 +0000 UTC" firstStartedPulling="2026-03-06 03:04:02.981917965 +0000 UTC m=+65.493889047" lastFinishedPulling="2026-03-06 03:04:17.097845599 +0000 UTC m=+79.609816679" observedRunningTime="2026-03-06 03:04:18.726124369 +0000 UTC m=+81.238095458" watchObservedRunningTime="2026-03-06 03:04:18.837617868 +0000 UTC m=+81.349588956" Mar 6 03:04:19.371811 sshd[6093]: Connection closed by 68.220.241.50 port 52684 Mar 6 03:04:19.373636 sshd-session[6069]: pam_unix(sshd:session): session closed for user core Mar 6 03:04:19.378959 systemd-logind[1952]: Session 11 logged out. Waiting for processes to exit. Mar 6 03:04:19.379594 systemd[1]: sshd@10-172.31.18.81:22-68.220.241.50:52684.service: Deactivated successfully. Mar 6 03:04:19.383108 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 03:04:19.385703 systemd-logind[1952]: Removed session 11. Mar 6 03:04:19.461537 systemd[1]: Started sshd@11-172.31.18.81:22-68.220.241.50:52690.service - OpenSSH per-connection server daemon (68.220.241.50:52690). Mar 6 03:04:19.946957 sshd[6143]: Accepted publickey for core from 68.220.241.50 port 52690 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:04:19.948391 sshd-session[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:04:19.955208 systemd-logind[1952]: New session 12 of user core. 
Mar 6 03:04:19.964705 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 03:04:20.370263 sshd[6171]: Connection closed by 68.220.241.50 port 52690 Mar 6 03:04:20.371647 sshd-session[6143]: pam_unix(sshd:session): session closed for user core Mar 6 03:04:20.382045 systemd[1]: sshd@11-172.31.18.81:22-68.220.241.50:52690.service: Deactivated successfully. Mar 6 03:04:20.386317 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 03:04:20.389252 systemd-logind[1952]: Session 12 logged out. Waiting for processes to exit. Mar 6 03:04:20.393347 systemd-logind[1952]: Removed session 12. Mar 6 03:04:20.460703 systemd[1]: Started sshd@12-172.31.18.81:22-68.220.241.50:52696.service - OpenSSH per-connection server daemon (68.220.241.50:52696). Mar 6 03:04:20.914662 sshd[6181]: Accepted publickey for core from 68.220.241.50 port 52696 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY Mar 6 03:04:20.916150 sshd-session[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:04:20.922452 systemd-logind[1952]: New session 13 of user core. Mar 6 03:04:20.926610 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 6 03:04:21.352004 sshd[6206]: Connection closed by 68.220.241.50 port 52696 Mar 6 03:04:21.354992 sshd-session[6181]: pam_unix(sshd:session): session closed for user core Mar 6 03:04:21.361066 systemd[1]: sshd@12-172.31.18.81:22-68.220.241.50:52696.service: Deactivated successfully. Mar 6 03:04:21.364300 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 03:04:21.366197 systemd-logind[1952]: Session 13 logged out. Waiting for processes to exit. Mar 6 03:04:21.368846 systemd-logind[1952]: Removed session 13. Mar 6 03:04:26.442670 systemd[1]: Started sshd@13-172.31.18.81:22-68.220.241.50:36042.service - OpenSSH per-connection server daemon (68.220.241.50:36042). 
Mar 6 03:04:26.989115 sshd[6288]: Accepted publickey for core from 68.220.241.50 port 36042 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:26.991823 sshd-session[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:26.998516 systemd-logind[1952]: New session 14 of user core.
Mar 6 03:04:27.004802 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 6 03:04:27.567619 sshd[6291]: Connection closed by 68.220.241.50 port 36042
Mar 6 03:04:27.568848 sshd-session[6288]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:27.574385 systemd-logind[1952]: Session 14 logged out. Waiting for processes to exit.
Mar 6 03:04:27.574768 systemd[1]: sshd@13-172.31.18.81:22-68.220.241.50:36042.service: Deactivated successfully.
Mar 6 03:04:27.577525 systemd[1]: session-14.scope: Deactivated successfully.
Mar 6 03:04:27.580714 systemd-logind[1952]: Removed session 14.
Mar 6 03:04:27.663738 systemd[1]: Started sshd@14-172.31.18.81:22-68.220.241.50:36046.service - OpenSSH per-connection server daemon (68.220.241.50:36046).
Mar 6 03:04:28.105361 sshd[6303]: Accepted publickey for core from 68.220.241.50 port 36046 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:28.106641 sshd-session[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:28.112730 systemd-logind[1952]: New session 15 of user core.
Mar 6 03:04:28.117689 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 6 03:04:31.762440 sshd[6306]: Connection closed by 68.220.241.50 port 36046
Mar 6 03:04:31.763240 sshd-session[6303]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:31.768738 systemd[1]: sshd@14-172.31.18.81:22-68.220.241.50:36046.service: Deactivated successfully.
Mar 6 03:04:31.771568 systemd[1]: session-15.scope: Deactivated successfully.
Mar 6 03:04:31.773312 systemd-logind[1952]: Session 15 logged out. Waiting for processes to exit.
Mar 6 03:04:31.774891 systemd-logind[1952]: Removed session 15.
Mar 6 03:04:31.852660 systemd[1]: Started sshd@15-172.31.18.81:22-68.220.241.50:36048.service - OpenSSH per-connection server daemon (68.220.241.50:36048).
Mar 6 03:04:32.350561 sshd[6320]: Accepted publickey for core from 68.220.241.50 port 36048 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:32.352074 sshd-session[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:32.358389 systemd-logind[1952]: New session 16 of user core.
Mar 6 03:04:32.369648 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 6 03:04:33.664903 sshd[6323]: Connection closed by 68.220.241.50 port 36048
Mar 6 03:04:33.666890 sshd-session[6320]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:33.675902 systemd[1]: sshd@15-172.31.18.81:22-68.220.241.50:36048.service: Deactivated successfully.
Mar 6 03:04:33.679376 systemd[1]: session-16.scope: Deactivated successfully.
Mar 6 03:04:33.684697 systemd-logind[1952]: Session 16 logged out. Waiting for processes to exit.
Mar 6 03:04:33.686888 systemd-logind[1952]: Removed session 16.
Mar 6 03:04:33.754654 systemd[1]: Started sshd@16-172.31.18.81:22-68.220.241.50:49972.service - OpenSSH per-connection server daemon (68.220.241.50:49972).
Mar 6 03:04:34.210006 sshd[6347]: Accepted publickey for core from 68.220.241.50 port 49972 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:34.213097 sshd-session[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:34.225133 systemd-logind[1952]: New session 17 of user core.
Mar 6 03:04:34.229258 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 6 03:04:35.326666 sshd[6350]: Connection closed by 68.220.241.50 port 49972
Mar 6 03:04:35.328658 sshd-session[6347]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:35.336606 systemd[1]: sshd@16-172.31.18.81:22-68.220.241.50:49972.service: Deactivated successfully.
Mar 6 03:04:35.340797 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 03:04:35.345062 systemd-logind[1952]: Session 17 logged out. Waiting for processes to exit.
Mar 6 03:04:35.350515 systemd-logind[1952]: Removed session 17.
Mar 6 03:04:35.419919 systemd[1]: Started sshd@17-172.31.18.81:22-68.220.241.50:49980.service - OpenSSH per-connection server daemon (68.220.241.50:49980).
Mar 6 03:04:35.897648 sshd[6375]: Accepted publickey for core from 68.220.241.50 port 49980 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:35.899249 sshd-session[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:35.904483 systemd-logind[1952]: New session 18 of user core.
Mar 6 03:04:35.910592 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 03:04:36.291470 sshd[6378]: Connection closed by 68.220.241.50 port 49980
Mar 6 03:04:36.292901 sshd-session[6375]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:36.300370 systemd-logind[1952]: Session 18 logged out. Waiting for processes to exit.
Mar 6 03:04:36.300681 systemd[1]: sshd@17-172.31.18.81:22-68.220.241.50:49980.service: Deactivated successfully.
Mar 6 03:04:36.303173 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 03:04:36.305545 systemd-logind[1952]: Removed session 18.
Mar 6 03:04:41.382284 systemd[1]: Started sshd@18-172.31.18.81:22-68.220.241.50:49984.service - OpenSSH per-connection server daemon (68.220.241.50:49984).
Mar 6 03:04:41.930469 sshd[6414]: Accepted publickey for core from 68.220.241.50 port 49984 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:41.935634 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:41.942279 systemd-logind[1952]: New session 19 of user core.
Mar 6 03:04:41.950645 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 03:04:42.555549 sshd[6417]: Connection closed by 68.220.241.50 port 49984
Mar 6 03:04:42.556706 sshd-session[6414]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:42.564603 systemd-logind[1952]: Session 19 logged out. Waiting for processes to exit.
Mar 6 03:04:42.565143 systemd[1]: sshd@18-172.31.18.81:22-68.220.241.50:49984.service: Deactivated successfully.
Mar 6 03:04:42.568051 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 03:04:42.570530 systemd-logind[1952]: Removed session 19.
Mar 6 03:04:47.649704 systemd[1]: Started sshd@19-172.31.18.81:22-68.220.241.50:48610.service - OpenSSH per-connection server daemon (68.220.241.50:48610).
Mar 6 03:04:48.097536 sshd[6439]: Accepted publickey for core from 68.220.241.50 port 48610 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:48.101041 sshd-session[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:48.121243 systemd-logind[1952]: New session 20 of user core.
Mar 6 03:04:48.124788 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 03:04:48.701453 sshd[6442]: Connection closed by 68.220.241.50 port 48610
Mar 6 03:04:48.701110 sshd-session[6439]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:48.708020 systemd-logind[1952]: Session 20 logged out. Waiting for processes to exit.
Mar 6 03:04:48.709222 systemd[1]: sshd@19-172.31.18.81:22-68.220.241.50:48610.service: Deactivated successfully.
Mar 6 03:04:48.714755 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 03:04:48.721125 systemd-logind[1952]: Removed session 20.
Mar 6 03:04:53.795577 systemd[1]: Started sshd@20-172.31.18.81:22-68.220.241.50:51228.service - OpenSSH per-connection server daemon (68.220.241.50:51228).
Mar 6 03:04:54.341868 sshd[6503]: Accepted publickey for core from 68.220.241.50 port 51228 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:04:54.343746 sshd-session[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:54.348674 systemd-logind[1952]: New session 21 of user core.
Mar 6 03:04:54.354644 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 03:04:55.267675 sshd[6506]: Connection closed by 68.220.241.50 port 51228
Mar 6 03:04:55.272117 sshd-session[6503]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:55.281168 systemd[1]: sshd@20-172.31.18.81:22-68.220.241.50:51228.service: Deactivated successfully.
Mar 6 03:04:55.284217 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 03:04:55.285121 systemd-logind[1952]: Session 21 logged out. Waiting for processes to exit.
Mar 6 03:04:55.287481 systemd-logind[1952]: Removed session 21.
Mar 6 03:04:57.557251 update_engine[1953]: I20260306 03:04:57.557163 1953 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 6 03:04:57.557251 update_engine[1953]: I20260306 03:04:57.557246 1953 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 6 03:04:57.564050 update_engine[1953]: I20260306 03:04:57.563997 1953 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 6 03:04:57.566230 update_engine[1953]: I20260306 03:04:57.566025 1953 omaha_request_params.cc:62] Current group set to stable
Mar 6 03:04:57.568563 update_engine[1953]: I20260306 03:04:57.567107 1953 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 6 03:04:57.568563 update_engine[1953]: I20260306 03:04:57.567135 1953 update_attempter.cc:643] Scheduling an action processor start.
Mar 6 03:04:57.568563 update_engine[1953]: I20260306 03:04:57.567160 1953 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 6 03:04:57.568563 update_engine[1953]: I20260306 03:04:57.567219 1953 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 6 03:04:57.568563 update_engine[1953]: I20260306 03:04:57.567317 1953 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 6 03:04:57.568563 update_engine[1953]: I20260306 03:04:57.567327 1953 omaha_request_action.cc:272] Request:
Mar 6 03:04:57.568563 update_engine[1953]:
Mar 6 03:04:57.568563 update_engine[1953]:
Mar 6 03:04:57.568563 update_engine[1953]:
Mar 6 03:04:57.568563 update_engine[1953]:
Mar 6 03:04:57.568563 update_engine[1953]:
Mar 6 03:04:57.568563 update_engine[1953]:
Mar 6 03:04:57.568563 update_engine[1953]:
Mar 6 03:04:57.568563 update_engine[1953]:
Mar 6 03:04:57.568563 update_engine[1953]: I20260306 03:04:57.567337 1953 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 6 03:04:57.598082 update_engine[1953]: I20260306 03:04:57.598026 1953 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 6 03:04:57.603500 update_engine[1953]: I20260306 03:04:57.601689 1953 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 6 03:04:57.618796 locksmithd[2017]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 6 03:04:57.634370 update_engine[1953]: E20260306 03:04:57.634306 1953 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 6 03:04:57.634554 update_engine[1953]: I20260306 03:04:57.634521 1953 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 6 03:05:00.362504 systemd[1]: Started sshd@21-172.31.18.81:22-68.220.241.50:51234.service - OpenSSH per-connection server daemon (68.220.241.50:51234).
Mar 6 03:05:00.860268 sshd[6520]: Accepted publickey for core from 68.220.241.50 port 51234 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:05:00.862977 sshd-session[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:05:00.869969 systemd-logind[1952]: New session 22 of user core.
Mar 6 03:05:00.876639 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 03:05:02.095756 sshd[6526]: Connection closed by 68.220.241.50 port 51234
Mar 6 03:05:02.102727 sshd-session[6520]: pam_unix(sshd:session): session closed for user core
Mar 6 03:05:02.137953 systemd[1]: sshd@21-172.31.18.81:22-68.220.241.50:51234.service: Deactivated successfully.
Mar 6 03:05:02.156758 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 03:05:02.183214 systemd-logind[1952]: Session 22 logged out. Waiting for processes to exit.
Mar 6 03:05:02.198376 systemd-logind[1952]: Removed session 22.
Mar 6 03:05:07.188367 systemd[1]: Started sshd@22-172.31.18.81:22-68.220.241.50:44608.service - OpenSSH per-connection server daemon (68.220.241.50:44608).
Mar 6 03:05:07.477577 update_engine[1953]: I20260306 03:05:07.477492 1953 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 6 03:05:07.478224 update_engine[1953]: I20260306 03:05:07.477607 1953 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 6 03:05:07.478279 update_engine[1953]: I20260306 03:05:07.478232 1953 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 6 03:05:07.479238 update_engine[1953]: E20260306 03:05:07.479198 1953 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 6 03:05:07.479366 update_engine[1953]: I20260306 03:05:07.479296 1953 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 6 03:05:07.684973 sshd[6541]: Accepted publickey for core from 68.220.241.50 port 44608 ssh2: RSA SHA256:qfQ2rypVMEWpTsWSSrgOtJoJZgil1uRcnShauR4vebY
Mar 6 03:05:07.687106 sshd-session[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:05:07.696748 systemd-logind[1952]: New session 23 of user core.
Mar 6 03:05:07.704637 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 03:05:08.146930 sshd[6564]: Connection closed by 68.220.241.50 port 44608
Mar 6 03:05:08.147683 sshd-session[6541]: pam_unix(sshd:session): session closed for user core
Mar 6 03:05:08.153050 systemd[1]: sshd@22-172.31.18.81:22-68.220.241.50:44608.service: Deactivated successfully.
Mar 6 03:05:08.153962 systemd-logind[1952]: Session 23 logged out. Waiting for processes to exit.
Mar 6 03:05:08.156226 systemd[1]: session-23.scope: Deactivated successfully.
Mar 6 03:05:08.158302 systemd-logind[1952]: Removed session 23.
Mar 6 03:05:17.483755 update_engine[1953]: I20260306 03:05:17.483654 1953 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 6 03:05:17.484396 update_engine[1953]: I20260306 03:05:17.483781 1953 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 6 03:05:17.485082 update_engine[1953]: I20260306 03:05:17.485044 1953 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 6 03:05:17.508269 update_engine[1953]: E20260306 03:05:17.508208 1953 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 6 03:05:17.508413 update_engine[1953]: I20260306 03:05:17.508340 1953 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 6 03:05:22.308384 systemd[1]: cri-containerd-286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360.scope: Deactivated successfully.
Mar 6 03:05:22.308781 systemd[1]: cri-containerd-286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360.scope: Consumed 3.361s CPU time, 91.6M memory peak, 88.7M read from disk.
Mar 6 03:05:22.514211 containerd[1983]: time="2026-03-06T03:05:22.502375953Z" level=info msg="received container exit event container_id:\"286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360\" id:\"286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360\" pid:3166 exit_status:1 exited_at:{seconds:1772766322 nanos:424283876}"
Mar 6 03:05:22.622291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360-rootfs.mount: Deactivated successfully.
Mar 6 03:05:23.057613 kubelet[3329]: I0306 03:05:23.048031 3329 scope.go:122] "RemoveContainer" containerID="286148a75fcdaba27cacdbfcea01438d9acbf4f3fd3265403a32afe203b35360"
Mar 6 03:05:23.158448 containerd[1983]: time="2026-03-06T03:05:23.158356746Z" level=info msg="CreateContainer within sandbox \"acf4ee4e6dd77f5e82c06266f1725108c2312d0518818730a77d5684e39bcb60\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 6 03:05:23.168868 systemd[1]: cri-containerd-4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87.scope: Deactivated successfully.
Mar 6 03:05:23.169212 systemd[1]: cri-containerd-4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87.scope: Consumed 9.659s CPU time, 129.1M memory peak, 56.9M read from disk.
Mar 6 03:05:23.176939 containerd[1983]: time="2026-03-06T03:05:23.176528214Z" level=info msg="received container exit event container_id:\"4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87\" id:\"4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87\" pid:3836 exit_status:1 exited_at:{seconds:1772766323 nanos:175909912}"
Mar 6 03:05:23.228673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87-rootfs.mount: Deactivated successfully.
Mar 6 03:05:23.373261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393863509.mount: Deactivated successfully.
Mar 6 03:05:23.378051 containerd[1983]: time="2026-03-06T03:05:23.377481467Z" level=info msg="Container 519bb10749aa6ecd4fb9bfef1455f60d62293ba377df408405f8e6d30a87b2fb: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:05:23.407880 containerd[1983]: time="2026-03-06T03:05:23.407829664Z" level=info msg="CreateContainer within sandbox \"acf4ee4e6dd77f5e82c06266f1725108c2312d0518818730a77d5684e39bcb60\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"519bb10749aa6ecd4fb9bfef1455f60d62293ba377df408405f8e6d30a87b2fb\""
Mar 6 03:05:23.412832 containerd[1983]: time="2026-03-06T03:05:23.412789831Z" level=info msg="StartContainer for \"519bb10749aa6ecd4fb9bfef1455f60d62293ba377df408405f8e6d30a87b2fb\""
Mar 6 03:05:23.414343 containerd[1983]: time="2026-03-06T03:05:23.414290139Z" level=info msg="connecting to shim 519bb10749aa6ecd4fb9bfef1455f60d62293ba377df408405f8e6d30a87b2fb" address="unix:///run/containerd/s/088658aed94733e929944d2438d5fe34d62a1bddce2a4b21097988f236ab3f6f" protocol=ttrpc version=3
Mar 6 03:05:23.502787 systemd[1]: Started cri-containerd-519bb10749aa6ecd4fb9bfef1455f60d62293ba377df408405f8e6d30a87b2fb.scope - libcontainer container 519bb10749aa6ecd4fb9bfef1455f60d62293ba377df408405f8e6d30a87b2fb.
Mar 6 03:05:23.607456 containerd[1983]: time="2026-03-06T03:05:23.607384333Z" level=info msg="StartContainer for \"519bb10749aa6ecd4fb9bfef1455f60d62293ba377df408405f8e6d30a87b2fb\" returns successfully"
Mar 6 03:05:24.040257 kubelet[3329]: I0306 03:05:24.040191 3329 scope.go:122] "RemoveContainer" containerID="4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87"
Mar 6 03:05:24.053404 containerd[1983]: time="2026-03-06T03:05:24.053358565Z" level=info msg="CreateContainer within sandbox \"618adb6d4de6f3d1adab051101c6c7a6db375409469f08f4598ce9505f295988\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 6 03:05:24.075916 containerd[1983]: time="2026-03-06T03:05:24.074571486Z" level=info msg="Container 5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:05:24.088931 containerd[1983]: time="2026-03-06T03:05:24.088887183Z" level=info msg="CreateContainer within sandbox \"618adb6d4de6f3d1adab051101c6c7a6db375409469f08f4598ce9505f295988\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e\""
Mar 6 03:05:24.090467 containerd[1983]: time="2026-03-06T03:05:24.089371108Z" level=info msg="StartContainer for \"5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e\""
Mar 6 03:05:24.090467 containerd[1983]: time="2026-03-06T03:05:24.090393024Z" level=info msg="connecting to shim 5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e" address="unix:///run/containerd/s/821fb70241586150d055bc6ba4b70bb375d58be72108dab4da57be435ab1e9d8" protocol=ttrpc version=3
Mar 6 03:05:24.120639 systemd[1]: Started cri-containerd-5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e.scope - libcontainer container 5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e.
Mar 6 03:05:24.162283 containerd[1983]: time="2026-03-06T03:05:24.162242422Z" level=info msg="StartContainer for \"5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e\" returns successfully"
Mar 6 03:05:27.477885 update_engine[1953]: I20260306 03:05:27.477818 1953 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 6 03:05:27.478348 update_engine[1953]: I20260306 03:05:27.477929 1953 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 6 03:05:27.478548 update_engine[1953]: I20260306 03:05:27.478515 1953 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 6 03:05:27.479819 update_engine[1953]: E20260306 03:05:27.479784 1953 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 6 03:05:27.479924 update_engine[1953]: I20260306 03:05:27.479874 1953 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 6 03:05:27.479924 update_engine[1953]: I20260306 03:05:27.479887 1953 omaha_request_action.cc:617] Omaha request response:
Mar 6 03:05:27.480003 update_engine[1953]: E20260306 03:05:27.479974 1953 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 6 03:05:27.480055 update_engine[1953]: I20260306 03:05:27.480005 1953 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 6 03:05:27.480055 update_engine[1953]: I20260306 03:05:27.480013 1953 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 6 03:05:27.480055 update_engine[1953]: I20260306 03:05:27.480021 1953 update_attempter.cc:306] Processing Done.
Mar 6 03:05:27.480055 update_engine[1953]: E20260306 03:05:27.480041 1953 update_attempter.cc:619] Update failed.
Mar 6 03:05:27.480055 update_engine[1953]: I20260306 03:05:27.480049 1953 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 6 03:05:27.481240 update_engine[1953]: I20260306 03:05:27.480057 1953 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 6 03:05:27.481240 update_engine[1953]: I20260306 03:05:27.480065 1953 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 6 03:05:27.481240 update_engine[1953]: I20260306 03:05:27.480158 1953 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 6 03:05:27.481240 update_engine[1953]: I20260306 03:05:27.480193 1953 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 6 03:05:27.481240 update_engine[1953]: I20260306 03:05:27.480201 1953 omaha_request_action.cc:272] Request:
Mar 6 03:05:27.481240 update_engine[1953]:
Mar 6 03:05:27.481240 update_engine[1953]:
Mar 6 03:05:27.481240 update_engine[1953]:
Mar 6 03:05:27.481240 update_engine[1953]:
Mar 6 03:05:27.481240 update_engine[1953]:
Mar 6 03:05:27.481240 update_engine[1953]:
Mar 6 03:05:27.481240 update_engine[1953]: I20260306 03:05:27.480211 1953 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 6 03:05:27.481240 update_engine[1953]: I20260306 03:05:27.480247 1953 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 6 03:05:27.481240 update_engine[1953]: I20260306 03:05:27.480609 1953 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 6 03:05:27.481749 locksmithd[2017]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 6 03:05:27.482235 update_engine[1953]: E20260306 03:05:27.482194 1953 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 6 03:05:27.482316 update_engine[1953]: I20260306 03:05:27.482276 1953 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 6 03:05:27.482316 update_engine[1953]: I20260306 03:05:27.482290 1953 omaha_request_action.cc:617] Omaha request response:
Mar 6 03:05:27.482316 update_engine[1953]: I20260306 03:05:27.482299 1953 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 6 03:05:27.482316 update_engine[1953]: I20260306 03:05:27.482307 1953 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 6 03:05:27.482316 update_engine[1953]: I20260306 03:05:27.482313 1953 update_attempter.cc:306] Processing Done.
Mar 6 03:05:27.482722 update_engine[1953]: I20260306 03:05:27.482322 1953 update_attempter.cc:310] Error event sent.
Mar 6 03:05:27.482722 update_engine[1953]: I20260306 03:05:27.482334 1953 update_check_scheduler.cc:74] Next update check in 41m34s
Mar 6 03:05:27.482789 locksmithd[2017]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 6 03:05:28.303275 systemd[1]: cri-containerd-df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de.scope: Deactivated successfully.
Mar 6 03:05:28.304044 systemd[1]: cri-containerd-df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de.scope: Consumed 1.478s CPU time, 37.7M memory peak, 49M read from disk.
Mar 6 03:05:28.307488 containerd[1983]: time="2026-03-06T03:05:28.307280482Z" level=info msg="received container exit event container_id:\"df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de\" id:\"df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de\" pid:3167 exit_status:1 exited_at:{seconds:1772766328 nanos:306414050}"
Mar 6 03:05:28.338530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de-rootfs.mount: Deactivated successfully.
Mar 6 03:05:29.062057 kubelet[3329]: I0306 03:05:29.062022 3329 scope.go:122] "RemoveContainer" containerID="df34b4edfdcd6c7c9bad6dbe00b123c36f4ce211eeb0201e9aaad8c5187ae2de"
Mar 6 03:05:29.064408 containerd[1983]: time="2026-03-06T03:05:29.064370485Z" level=info msg="CreateContainer within sandbox \"1856b59249b0dae0b8b8359bf43e58333fc32f6bc14f387667e8df73ec79ce8c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 6 03:05:29.083951 containerd[1983]: time="2026-03-06T03:05:29.083519358Z" level=info msg="Container 515e60b632c546b6aecabd46d915a47cc1da7e8d3e0b80166ba0a1ff4c161107: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:05:29.092112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132609769.mount: Deactivated successfully.
Mar 6 03:05:29.100879 containerd[1983]: time="2026-03-06T03:05:29.100711961Z" level=info msg="CreateContainer within sandbox \"1856b59249b0dae0b8b8359bf43e58333fc32f6bc14f387667e8df73ec79ce8c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"515e60b632c546b6aecabd46d915a47cc1da7e8d3e0b80166ba0a1ff4c161107\""
Mar 6 03:05:29.101262 containerd[1983]: time="2026-03-06T03:05:29.101233784Z" level=info msg="StartContainer for \"515e60b632c546b6aecabd46d915a47cc1da7e8d3e0b80166ba0a1ff4c161107\""
Mar 6 03:05:29.102546 containerd[1983]: time="2026-03-06T03:05:29.102513245Z" level=info msg="connecting to shim 515e60b632c546b6aecabd46d915a47cc1da7e8d3e0b80166ba0a1ff4c161107" address="unix:///run/containerd/s/f0b462813fa9eef36ef15cb36c03036ec312228e0881bbb1621a285f5944bcbd" protocol=ttrpc version=3
Mar 6 03:05:29.140714 systemd[1]: Started cri-containerd-515e60b632c546b6aecabd46d915a47cc1da7e8d3e0b80166ba0a1ff4c161107.scope - libcontainer container 515e60b632c546b6aecabd46d915a47cc1da7e8d3e0b80166ba0a1ff4c161107.
Mar 6 03:05:29.202559 containerd[1983]: time="2026-03-06T03:05:29.202519508Z" level=info msg="StartContainer for \"515e60b632c546b6aecabd46d915a47cc1da7e8d3e0b80166ba0a1ff4c161107\" returns successfully"
Mar 6 03:05:30.525974 kubelet[3329]: E0306 03:05:30.519709 3329 request.go:1196] "Unexpected error when reading response body" err="context deadline exceeded (Client.Timeout or context cancellation while reading body)"
Mar 6 03:05:30.605240 kubelet[3329]: E0306 03:05:30.605184 3329 controller.go:251] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: context deadline exceeded (Client.Timeout or context cancellation while reading body)"
Mar 6 03:05:36.308221 systemd[1]: cri-containerd-5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e.scope: Deactivated successfully.
Mar 6 03:05:36.308728 systemd[1]: cri-containerd-5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e.scope: Consumed 387ms CPU time, 91.7M memory peak, 50.5M read from disk.
Mar 6 03:05:36.309701 containerd[1983]: time="2026-03-06T03:05:36.309644242Z" level=info msg="received container exit event container_id:\"5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e\" id:\"5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e\" pid:6744 exit_status:1 exited_at:{seconds:1772766336 nanos:308339187}"
Mar 6 03:05:36.337744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e-rootfs.mount: Deactivated successfully.
Mar 6 03:05:37.090092 kubelet[3329]: I0306 03:05:37.090057 3329 scope.go:122] "RemoveContainer" containerID="4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87"
Mar 6 03:05:37.091258 kubelet[3329]: I0306 03:05:37.091233 3329 scope.go:122] "RemoveContainer" containerID="5d3af012445b95d41b3f356e7bf950653d8c514fe78963ab2577ff589f30c77e"
Mar 6 03:05:37.099591 kubelet[3329]: E0306 03:05:37.099539 3329 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-pmvkw_tigera-operator(c54662b0-debd-40bb-b56f-b5102250da55)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-pmvkw" podUID="c54662b0-debd-40bb-b56f-b5102250da55"
Mar 6 03:05:37.127113 containerd[1983]: time="2026-03-06T03:05:37.127055689Z" level=info msg="RemoveContainer for \"4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87\""
Mar 6 03:05:37.143575 containerd[1983]: time="2026-03-06T03:05:37.143520579Z" level=info msg="RemoveContainer for \"4d355dbba419be32533dd4a5b2cb614f3a836aed8d454c883865888563206b87\" returns successfully"
Mar 6 03:05:40.606595 kubelet[3329]: E0306 03:05:40.606474 3329 request.go:1196] "Unexpected error when reading response body" err="net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Mar 6 03:05:40.607172 kubelet[3329]: E0306 03:05:40.607139 3329 controller.go:251] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"