Oct 13 05:36:38.760496 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025 Oct 13 05:36:38.760523 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:36:38.760533 kernel: BIOS-provided physical RAM map: Oct 13 05:36:38.760541 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 13 05:36:38.760547 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Oct 13 05:36:38.760554 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Oct 13 05:36:38.760563 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Oct 13 05:36:38.760573 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Oct 13 05:36:38.760580 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Oct 13 05:36:38.760588 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Oct 13 05:36:38.760595 kernel: NX (Execute Disable) protection: active Oct 13 05:36:38.760602 kernel: APIC: Static calls initialized Oct 13 05:36:38.760610 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Oct 13 05:36:38.760618 kernel: extended physical RAM map: Oct 13 05:36:38.760629 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 13 05:36:38.760638 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Oct 13 05:36:38.760646 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Oct 13 05:36:38.760654 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Oct 13 05:36:38.760662 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Oct 13 05:36:38.760670 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Oct 13 05:36:38.760678 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Oct 13 05:36:38.760689 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Oct 13 05:36:38.760697 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Oct 13 05:36:38.760705 kernel: efi: EFI v2.7 by EDK II Oct 13 05:36:38.760714 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Oct 13 05:36:38.760722 kernel: secureboot: Secure boot disabled Oct 13 05:36:38.760730 kernel: SMBIOS 2.7 present. 
Oct 13 05:36:38.760738 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Oct 13 05:36:38.760747 kernel: DMI: Memory slots populated: 1/1 Oct 13 05:36:38.760755 kernel: Hypervisor detected: KVM Oct 13 05:36:38.760763 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 13 05:36:38.760771 kernel: kvm-clock: using sched offset of 6044257228 cycles Oct 13 05:36:38.760783 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 13 05:36:38.760791 kernel: tsc: Detected 2499.996 MHz processor Oct 13 05:36:38.760800 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 13 05:36:38.760809 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 13 05:36:38.760817 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Oct 13 05:36:38.760826 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 13 05:36:38.760835 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 13 05:36:38.760846 kernel: Using GB pages for direct mapping Oct 13 05:36:38.760858 kernel: ACPI: Early table checksum verification disabled Oct 13 05:36:38.760867 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Oct 13 05:36:38.760876 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Oct 13 05:36:38.760885 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 13 05:36:38.760896 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Oct 13 05:36:38.760905 kernel: ACPI: FACS 0x00000000789D0000 000040 Oct 13 05:36:38.760914 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Oct 13 05:36:38.760923 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 13 05:36:38.760932 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 13 05:36:38.760941 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Oct 13 05:36:38.760952 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Oct 13 05:36:38.760961 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 13 05:36:38.760970 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 13 05:36:38.760979 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Oct 13 05:36:38.760988 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Oct 13 05:36:38.760997 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Oct 13 05:36:38.761007 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Oct 13 05:36:38.761038 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Oct 13 05:36:38.761047 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Oct 13 05:36:38.761056 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Oct 13 05:36:38.761065 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Oct 13 05:36:38.761074 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Oct 13 05:36:38.761083 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Oct 13 05:36:38.761092 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Oct 13 05:36:38.761101 kernel: ACPI: Reserving BGRT table memory 
at [mem 0x78951000-0x78951037] Oct 13 05:36:38.761112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Oct 13 05:36:38.761121 kernel: NUMA: Initialized distance table, cnt=1 Oct 13 05:36:38.761130 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Oct 13 05:36:38.761139 kernel: Zone ranges: Oct 13 05:36:38.761148 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 13 05:36:38.761157 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Oct 13 05:36:38.761166 kernel: Normal empty Oct 13 05:36:38.761175 kernel: Device empty Oct 13 05:36:38.761186 kernel: Movable zone start for each node Oct 13 05:36:38.761195 kernel: Early memory node ranges Oct 13 05:36:38.761204 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 13 05:36:38.761213 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Oct 13 05:36:38.761222 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Oct 13 05:36:38.761231 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Oct 13 05:36:38.761240 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 13 05:36:38.761251 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 13 05:36:38.761260 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 13 05:36:38.761269 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Oct 13 05:36:38.761278 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 13 05:36:38.761287 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 13 05:36:38.761296 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Oct 13 05:36:38.761305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 13 05:36:38.761314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 13 05:36:38.761326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 13 05:36:38.761335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 13 05:36:38.761344 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 13 05:36:38.761353 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 13 05:36:38.761362 kernel: TSC deadline timer available Oct 13 05:36:38.761371 kernel: CPU topo: Max. logical packages: 1 Oct 13 05:36:38.761380 kernel: CPU topo: Max. logical dies: 1 Oct 13 05:36:38.761391 kernel: CPU topo: Max. dies per package: 1 Oct 13 05:36:38.761399 kernel: CPU topo: Max. threads per core: 2 Oct 13 05:36:38.761408 kernel: CPU topo: Num. cores per package: 1 Oct 13 05:36:38.761417 kernel: CPU topo: Num. 
threads per package: 2 Oct 13 05:36:38.761426 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Oct 13 05:36:38.761435 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 13 05:36:38.761444 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Oct 13 05:36:38.761453 kernel: Booting paravirtualized kernel on KVM Oct 13 05:36:38.761464 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 13 05:36:38.761474 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 13 05:36:38.761483 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Oct 13 05:36:38.761492 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Oct 13 05:36:38.761500 kernel: pcpu-alloc: [0] 0 1 Oct 13 05:36:38.761510 kernel: kvm-guest: PV spinlocks enabled Oct 13 05:36:38.761519 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 13 05:36:38.761532 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:36:38.761541 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 05:36:38.761550 kernel: random: crng init done Oct 13 05:36:38.761559 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 05:36:38.761568 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 13 05:36:38.761577 kernel: Fallback order for Node 0: 0 Oct 13 05:36:38.761589 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Oct 13 05:36:38.761598 kernel: Policy zone: DMA32 Oct 13 05:36:38.761615 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 05:36:38.761627 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 13 05:36:38.761637 kernel: Kernel/User page tables isolation: enabled Oct 13 05:36:38.761646 kernel: ftrace: allocating 40210 entries in 158 pages Oct 13 05:36:38.761656 kernel: ftrace: allocated 158 pages with 5 groups Oct 13 05:36:38.761665 kernel: Dynamic Preempt: voluntary Oct 13 05:36:38.761675 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 05:36:38.761687 kernel: rcu: RCU event tracing is enabled. Oct 13 05:36:38.761697 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 13 05:36:38.761707 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 05:36:38.761716 kernel: Rude variant of Tasks RCU enabled. Oct 13 05:36:38.761725 kernel: Tracing variant of Tasks RCU enabled. Oct 13 05:36:38.761737 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 05:36:38.761747 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 13 05:36:38.761756 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 13 05:36:38.761766 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 13 05:36:38.761776 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Oct 13 05:36:38.761785 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 13 05:36:38.761795 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 05:36:38.761807 kernel: Console: colour dummy device 80x25 Oct 13 05:36:38.761816 kernel: printk: legacy console [tty0] enabled Oct 13 05:36:38.761826 kernel: printk: legacy console [ttyS0] enabled Oct 13 05:36:38.761835 kernel: ACPI: Core revision 20240827 Oct 13 05:36:38.761845 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Oct 13 05:36:38.761855 kernel: APIC: Switch to symmetric I/O mode setup Oct 13 05:36:38.761864 kernel: x2apic enabled Oct 13 05:36:38.761876 kernel: APIC: Switched APIC routing to: physical x2apic Oct 13 05:36:38.761886 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Oct 13 05:36:38.761896 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Oct 13 05:36:38.761905 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 13 05:36:38.761915 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Oct 13 05:36:38.761924 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 13 05:36:38.761934 kernel: Spectre V2 : Mitigation: Retpolines Oct 13 05:36:38.761943 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 13 05:36:38.761954 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Oct 13 05:36:38.761964 kernel: RETBleed: Vulnerable Oct 13 05:36:38.761973 kernel: Speculative Store Bypass: Vulnerable Oct 13 05:36:38.761982 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Oct 13 05:36:38.761991 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 13 05:36:38.762000 kernel: GDS: Unknown: Dependent on hypervisor status Oct 13 05:36:38.762009 kernel: active return thunk: its_return_thunk Oct 13 05:36:38.762030 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 13 05:36:38.762040 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 13 05:36:38.762050 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 13 05:36:38.762061 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 13 05:36:38.762071 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Oct 13 05:36:38.762081 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Oct 13 05:36:38.762090 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 13 05:36:38.762099 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 13 05:36:38.762108 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 13 05:36:38.762118 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Oct 13 05:36:38.762127 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 13 05:36:38.762137 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Oct 13 05:36:38.762146 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Oct 13 05:36:38.762155 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Oct 13 05:36:38.762167 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Oct 13 05:36:38.762176 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Oct 13 05:36:38.762191 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Oct 13 
05:36:38.762201 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Oct 13 05:36:38.762210 kernel: Freeing SMP alternatives memory: 32K Oct 13 05:36:38.762219 kernel: pid_max: default: 32768 minimum: 301 Oct 13 05:36:38.762228 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 05:36:38.762237 kernel: landlock: Up and running. Oct 13 05:36:38.762247 kernel: SELinux: Initializing. Oct 13 05:36:38.762256 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 13 05:36:38.762265 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 13 05:36:38.762277 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Oct 13 05:36:38.762287 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Oct 13 05:36:38.762296 kernel: signal: max sigframe size: 3632 Oct 13 05:36:38.762306 kernel: rcu: Hierarchical SRCU implementation. Oct 13 05:36:38.762316 kernel: rcu: Max phase no-delay instances is 400. Oct 13 05:36:38.762325 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 13 05:36:38.762335 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 13 05:36:38.762347 kernel: smp: Bringing up secondary CPUs ... Oct 13 05:36:38.762357 kernel: smpboot: x86: Booting SMP configuration: Oct 13 05:36:38.762367 kernel: .... node #0, CPUs: #1 Oct 13 05:36:38.762377 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Oct 13 05:36:38.762388 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Oct 13 05:36:38.762397 kernel: smp: Brought up 1 node, 2 CPUs Oct 13 05:36:38.762407 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Oct 13 05:36:38.762419 kernel: Memory: 1938768K/2037804K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 94468K reserved, 0K cma-reserved) Oct 13 05:36:38.762429 kernel: devtmpfs: initialized Oct 13 05:36:38.762439 kernel: x86/mm: Memory block size: 128MB Oct 13 05:36:38.762449 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Oct 13 05:36:38.762458 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 05:36:38.762468 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 13 05:36:38.762481 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 05:36:38.762493 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 05:36:38.762502 kernel: audit: initializing netlink subsys (disabled) Oct 13 05:36:38.762512 kernel: audit: type=2000 audit(1760333796.372:1): state=initialized audit_enabled=0 res=1 Oct 13 05:36:38.762521 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 05:36:38.762531 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 13 05:36:38.762541 kernel: cpuidle: using governor menu Oct 13 05:36:38.762551 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 05:36:38.762563 kernel: dca service started, version 1.12.1 Oct 13 05:36:38.762572 kernel: PCI: Using configuration type 1 for base access Oct 13 05:36:38.762582 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Oct 13 05:36:38.762592 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 05:36:38.762601 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 05:36:38.762611 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 05:36:38.762621 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 05:36:38.762633 kernel: ACPI: Added _OSI(Module Device) Oct 13 05:36:38.762643 kernel: ACPI: Added _OSI(Processor Device) Oct 13 05:36:38.762652 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 05:36:38.762662 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Oct 13 05:36:38.762672 kernel: ACPI: Interpreter enabled Oct 13 05:36:38.762681 kernel: ACPI: PM: (supports S0 S5) Oct 13 05:36:38.762691 kernel: ACPI: Using IOAPIC for interrupt routing Oct 13 05:36:38.762703 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 13 05:36:38.762713 kernel: PCI: Using E820 reservations for host bridge windows Oct 13 05:36:38.762723 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 13 05:36:38.762732 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 05:36:38.762950 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 13 05:36:38.764436 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 13 05:36:38.764613 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 13 05:36:38.764627 kernel: acpiphp: Slot [3] registered Oct 13 05:36:38.764637 kernel: acpiphp: Slot [4] registered Oct 13 05:36:38.764647 kernel: acpiphp: Slot [5] registered Oct 13 05:36:38.764657 kernel: acpiphp: Slot [6] registered Oct 13 05:36:38.764666 kernel: acpiphp: Slot [7] registered Oct 13 05:36:38.764676 kernel: acpiphp: Slot [8] registered Oct 13 05:36:38.764689 kernel: acpiphp: Slot [9] registered Oct 13 05:36:38.764699 kernel: acpiphp: Slot [10] registered Oct 13 05:36:38.764709 kernel: acpiphp: Slot [11] registered Oct 13 05:36:38.764719 kernel: acpiphp: Slot [12] registered Oct 13 05:36:38.764729 kernel: acpiphp: Slot [13] registered Oct 13 05:36:38.764738 kernel: acpiphp: Slot [14] registered Oct 13 05:36:38.764748 kernel: acpiphp: Slot [15] registered Oct 13 05:36:38.764757 kernel: acpiphp: Slot [16] registered Oct 13 05:36:38.764769 kernel: acpiphp: Slot [17] registered Oct 13 05:36:38.764779 kernel: acpiphp: Slot [18] registered Oct 13 05:36:38.764788 kernel: acpiphp: Slot [19] registered Oct 13 05:36:38.764798 kernel: acpiphp: Slot [20] registered Oct 13 05:36:38.764808 kernel: acpiphp: Slot [21] registered Oct 13 05:36:38.764817 kernel: acpiphp: Slot [22] registered Oct 13 05:36:38.764827 kernel: acpiphp: Slot [23] registered Oct 13 05:36:38.764839 kernel: acpiphp: Slot [24] registered Oct 13 05:36:38.764849 kernel: acpiphp: Slot [25] registered Oct 13 05:36:38.764859 kernel: acpiphp: Slot [26] registered Oct 13 05:36:38.764868 kernel: acpiphp: Slot [27] registered Oct 13 05:36:38.764878 kernel: acpiphp: Slot [28] registered Oct 13 05:36:38.764888 kernel: acpiphp: Slot [29] registered Oct 13 05:36:38.764897 kernel: acpiphp: Slot [30] registered Oct 13 05:36:38.764907 kernel: acpiphp: Slot [31] registered Oct 13 05:36:38.764919 kernel: PCI host bridge to bus 0000:00 Oct 13 05:36:38.765238 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 13 
05:36:38.765375 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 13 05:36:38.765495 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 13 05:36:38.765612 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 13 05:36:38.765729 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Oct 13 05:36:38.765850 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 05:36:38.765998 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Oct 13 05:36:38.766295 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Oct 13 05:36:38.766436 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Oct 13 05:36:38.766574 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 13 05:36:38.766702 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Oct 13 05:36:38.766839 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Oct 13 05:36:38.766968 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Oct 13 05:36:38.767119 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Oct 13 05:36:38.767249 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Oct 13 05:36:38.767382 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Oct 13 05:36:38.767517 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Oct 13 05:36:38.767645 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Oct 13 05:36:38.767772 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Oct 13 05:36:38.767899 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 13 05:36:38.768063 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Oct 13 05:36:38.768197 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Oct 13 05:36:38.768329 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Oct 13 05:36:38.768458 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Oct 13 05:36:38.768471 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 13 05:36:38.768481 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 13 05:36:38.768492 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 13 05:36:38.768505 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 13 05:36:38.768515 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 13 05:36:38.768525 kernel: iommu: Default domain type: Translated Oct 13 05:36:38.768535 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 05:36:38.768545 kernel: efivars: Registered efivars operations Oct 13 05:36:38.768554 kernel: PCI: Using ACPI for IRQ routing Oct 13 05:36:38.768565 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 13 05:36:38.768578 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Oct 13 05:36:38.768588 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Oct 13 05:36:38.768598 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Oct 13 05:36:38.768727 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Oct 13 05:36:38.768855 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Oct 13 05:36:38.768986 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 13 05:36:38.768998 kernel: vgaarb: loaded Oct 13 05:36:38.769011 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Oct 13 05:36:38.769030 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Oct 13 05:36:38.769041 kernel: clocksource: Switched to clocksource kvm-clock Oct 13 05:36:38.769050 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 05:36:38.769061 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 05:36:38.769071 kernel: pnp: PnP ACPI init Oct 13 05:36:38.769081 kernel: pnp: PnP ACPI: found 5 devices Oct 13 05:36:38.769093 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 05:36:38.769103 kernel: NET: Registered PF_INET protocol family Oct 13 05:36:38.769114 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 05:36:38.769124 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 13 05:36:38.769134 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 05:36:38.769144 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 13 05:36:38.769154 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 13 05:36:38.769166 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 13 05:36:38.769176 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 05:36:38.769186 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 05:36:38.769196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 05:36:38.769206 kernel: NET: Registered PF_XDP protocol family Oct 13 05:36:38.769331 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 13 05:36:38.769448 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 13 05:36:38.769569 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 13 05:36:38.769688 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 13 05:36:38.769807 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Oct 13 05:36:38.769970 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 13 05:36:38.769983 kernel: PCI: CLS 0 bytes, default 64 Oct 13 05:36:38.769994 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 13 05:36:38.770008 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Oct 13 05:36:38.771554 kernel: clocksource: Switched to clocksource tsc Oct 13 05:36:38.771570 kernel: Initialise system trusted keyrings Oct 13 05:36:38.771580 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 13 05:36:38.771591 kernel: Key type asymmetric registered Oct 13 05:36:38.771602 kernel: Asymmetric key parser 'x509' registered Oct 13 05:36:38.771612 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 13 05:36:38.771627 kernel: io scheduler mq-deadline registered Oct 13 05:36:38.771637 kernel: io scheduler kyber registered Oct 13 05:36:38.771647 kernel: io scheduler bfq registered Oct 13 05:36:38.771657 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 13 05:36:38.771667 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 05:36:38.771677 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 13 05:36:38.771687 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 13 05:36:38.771699 kernel: i8042: Warning: Keylock active Oct 13 
05:36:38.771709 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 13 05:36:38.771719 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 13 05:36:38.771894 kernel: rtc_cmos 00:00: RTC can wake from S4 Oct 13 05:36:38.772034 kernel: rtc_cmos 00:00: registered as rtc0 Oct 13 05:36:38.772159 kernel: rtc_cmos 00:00: setting system clock to 2025-10-13T05:36:35 UTC (1760333795) Oct 13 05:36:38.772286 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Oct 13 05:36:38.772316 kernel: intel_pstate: CPU model not supported Oct 13 05:36:38.772328 kernel: efifb: probing for efifb Oct 13 05:36:38.772339 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Oct 13 05:36:38.772350 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Oct 13 05:36:38.772360 kernel: efifb: scrolling: redraw Oct 13 05:36:38.772370 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 13 05:36:38.772380 kernel: Console: switching to colour frame buffer device 100x37 Oct 13 05:36:38.772393 kernel: fb0: EFI VGA frame buffer device Oct 13 05:36:38.772404 kernel: pstore: Using crash dump compression: deflate Oct 13 05:36:38.772414 kernel: pstore: Registered efi_pstore as persistent store backend Oct 13 05:36:38.772424 kernel: NET: Registered PF_INET6 protocol family Oct 13 05:36:38.772435 kernel: Segment Routing with IPv6 Oct 13 05:36:38.772445 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 05:36:38.772456 kernel: NET: Registered PF_PACKET protocol family Oct 13 05:36:38.772468 kernel: Key type dns_resolver registered Oct 13 05:36:38.772478 kernel: IPI shorthand broadcast: enabled Oct 13 05:36:38.772489 kernel: sched_clock: Marking stable (959001580, 156608335)->(1193322259, -77712344) Oct 13 05:36:38.772499 kernel: registered taskstats version 1 Oct 13 05:36:38.772510 kernel: Loading compiled-in X.509 certificates Oct 13 05:36:38.772520 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7' Oct 13 05:36:38.772530 kernel: Demotion targets for Node 0: null Oct 13 05:36:38.772543 kernel: Key type .fscrypt registered Oct 13 05:36:38.772553 kernel: Key type fscrypt-provisioning registered Oct 13 05:36:38.772564 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 13 05:36:38.772574 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:36:38.772584 kernel: ima: No architecture policies found Oct 13 05:36:38.772595 kernel: clk: Disabling unused clocks Oct 13 05:36:38.772615 kernel: Freeing unused kernel image (initmem) memory: 24532K Oct 13 05:36:38.772628 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:36:38.772638 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K Oct 13 05:36:38.772651 kernel: Run /init as init process Oct 13 05:36:38.772662 kernel: with arguments: Oct 13 05:36:38.772674 kernel: /init Oct 13 05:36:38.772685 kernel: with environment: Oct 13 05:36:38.772695 kernel: HOME=/ Oct 13 05:36:38.772705 kernel: TERM=linux Oct 13 05:36:38.772715 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:36:38.772824 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 13 05:36:38.772841 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 13 05:36:38.772933 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 13 05:36:38.772947 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 05:36:38.772957 kernel: GPT:25804799 != 33554431 Oct 13 05:36:38.772967 kernel: GPT:Alternate GPT header not at the end of the disk. 
Oct 13 05:36:38.772978 kernel: GPT:25804799 != 33554431 Oct 13 05:36:38.772987 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 05:36:38.773000 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 13 05:36:38.773025 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773082 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773109 kernel: SCSI subsystem initialized Oct 13 05:36:38.773122 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773133 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:36:38.773144 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:36:38.773155 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:36:38.773174 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 13 05:36:38.773185 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773195 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773270 kernel: raid6: avx512x4 gen() 18243 MB/s Oct 13 05:36:38.773292 kernel: raid6: avx512x2 gen() 18221 MB/s Oct 13 05:36:38.773305 kernel: raid6: avx512x1 gen() 18181 MB/s Oct 13 05:36:38.773315 kernel: raid6: avx2x4 gen() 18016 MB/s Oct 13 05:36:38.773330 kernel: raid6: avx2x2 gen() 18073 MB/s Oct 13 05:36:38.773340 kernel: raid6: avx2x1 gen() 13908 MB/s Oct 13 05:36:38.773351 kernel: raid6: using algorithm avx512x4 gen() 18243 MB/s Oct 13 05:36:38.773368 kernel: raid6: .... xor() 7721 MB/s, rmw enabled Oct 13 05:36:38.773379 kernel: raid6: using avx512x2 recovery algorithm Oct 13 05:36:38.773390 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773401 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 13 05:36:38.773414 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773423 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773434 kernel: xor: automatically using best checksumming function avx Oct 13 05:36:38.773444 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773454 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:36:38.773465 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (167) Oct 13 05:36:38.773475 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 Oct 13 05:36:38.773486 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:36:38.773499 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 13 05:36:38.773509 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:36:38.773520 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:36:38.773531 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:36:38.773541 kernel: loop: module loaded Oct 13 05:36:38.773552 kernel: loop0: detected capacity change from 0 to 100048 Oct 13 05:36:38.773562 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:36:38.773578 systemd[1]: Successfully made /usr/ read-only. 
Oct 13 05:36:38.773593 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:36:38.773605 systemd[1]: Detected virtualization amazon. Oct 13 05:36:38.773615 systemd[1]: Detected architecture x86-64. Oct 13 05:36:38.773626 systemd[1]: Running in initrd. Oct 13 05:36:38.773636 systemd[1]: No hostname configured, using default hostname. Oct 13 05:36:38.773650 systemd[1]: Hostname set to . Oct 13 05:36:38.773660 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:36:38.773671 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:36:38.773682 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:36:38.773693 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:36:38.773704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:36:38.773716 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:36:38.773729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:36:38.773741 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:36:38.773752 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:36:38.773763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:36:38.773774 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:36:38.773787 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:36:38.773799 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:36:38.773816 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:36:38.773827 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:36:38.773838 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:36:38.773852 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:36:38.773863 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:36:38.773876 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:36:38.773887 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:36:38.773898 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:36:38.773909 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:36:38.773920 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:36:38.773932 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:36:38.773948 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:36:38.773962 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:36:38.773973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Oct 13 05:36:38.773992 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:36:38.774006 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:36:38.774047 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:36:38.774062 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:36:38.774076 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:36:38.774087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:36:38.774099 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:36:38.774110 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:36:38.774124 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:36:38.774135 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:36:38.774146 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:36:38.774157 kernel: Bridge firewalling registered Oct 13 05:36:38.774168 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:36:38.774215 systemd-journald[303]: Collecting audit messages is disabled. Oct 13 05:36:38.774249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:36:38.774260 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:36:38.774272 systemd-journald[303]: Journal started Oct 13 05:36:38.774296 systemd-journald[303]: Runtime Journal (/run/log/journal/ec2e26e6dd68e790377b6c9cdf323ccc) is 4.8M, max 38.4M, 33.6M free. Oct 13 05:36:38.754430 systemd-modules-load[305]: Inserted module 'br_netfilter' Oct 13 05:36:38.782986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:36:38.783068 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:36:38.790253 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:36:38.814375 systemd-tmpfiles[323]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:36:38.815140 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:36:38.829728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:36:38.832805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:36:38.836959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:36:38.838888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:36:38.842211 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:36:38.871171 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:36:38.876107 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Oct 13 05:36:38.906649 dracut-cmdline[344]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:36:38.911293 systemd-resolved[330]: Positive Trust Anchors: Oct 13 05:36:38.911304 systemd-resolved[330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:36:38.911310 systemd-resolved[330]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:36:38.911369 systemd-resolved[330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:36:38.943963 systemd-resolved[330]: Defaulting to hostname 'linux'. Oct 13 05:36:38.946230 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:36:38.947907 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:36:39.085044 kernel: Loading iSCSI transport class v2.0-870. Oct 13 05:36:39.215047 kernel: iscsi: registered transport (tcp) Oct 13 05:36:39.239141 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:36:39.239212 kernel: QLogic iSCSI HBA Driver Oct 13 05:36:39.297778 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:36:39.318889 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:36:39.321451 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:36:39.369968 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:36:39.372046 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:36:39.373560 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:36:39.410174 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:36:39.413176 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:36:39.443513 systemd-udevd[588]: Using default interface naming scheme 'v257'. Oct 13 05:36:39.455467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:36:39.461161 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:36:39.474275 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:36:39.477793 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 13 05:36:39.489862 dracut-pre-trigger[671]: rd.md=0: removing MD RAID activation Oct 13 05:36:39.519581 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:36:39.523150 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:36:39.529511 systemd-networkd[682]: lo: Link UP Oct 13 05:36:39.529519 systemd-networkd[682]: lo: Gained carrier Oct 13 05:36:39.530275 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:36:39.530917 systemd[1]: Reached target network.target - Network. Oct 13 05:36:39.587059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:36:39.592001 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:36:39.701584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:36:39.701937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:36:39.702841 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:36:39.708504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:36:39.726330 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 13 05:36:39.726696 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 13 05:36:39.736070 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Oct 13 05:36:39.743167 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:2a:1d:dd:cc:23 Oct 13 05:36:39.744550 (udev-worker)[718]: Network interface NamePolicy= disabled on kernel command line. Oct 13 05:36:39.759758 systemd-networkd[682]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:36:39.760621 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:36:39.759771 systemd-networkd[682]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:36:39.766563 systemd-networkd[682]: eth0: Link UP Oct 13 05:36:39.766689 systemd-networkd[682]: eth0: Gained carrier Oct 13 05:36:39.766704 systemd-networkd[682]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:36:39.773943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:36:39.777101 systemd-networkd[682]: eth0: DHCPv4 address 172.31.26.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 13 05:36:39.882047 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Oct 13 05:36:39.914039 kernel: AES CTR mode by8 optimization enabled Oct 13 05:36:39.914106 kernel: nvme nvme0: using unchecked data buffer Oct 13 05:36:40.007955 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 13 05:36:40.072916 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Oct 13 05:36:40.087205 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:36:40.111539 disk-uuid[892]: Primary Header is updated. Oct 13 05:36:40.111539 disk-uuid[892]: Secondary Entries is updated. Oct 13 05:36:40.111539 disk-uuid[892]: Secondary Header is updated. Oct 13 05:36:40.133844 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Oct 13 05:36:40.167950 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Oct 13 05:36:40.426852 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:36:40.428776 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:36:40.429495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:36:40.430700 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:36:40.432891 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:36:40.457487 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:36:40.956248 systemd-networkd[682]: eth0: Gained IPv6LL Oct 13 05:36:41.213880 disk-uuid[898]: Warning: The kernel is still using the old partition table. Oct 13 05:36:41.213880 disk-uuid[898]: The new table will be used at the next reboot or after you Oct 13 05:36:41.213880 disk-uuid[898]: run partprobe(8) or kpartx(8) Oct 13 05:36:41.213880 disk-uuid[898]: The operation has completed successfully. Oct 13 05:36:41.224145 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:36:41.224292 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:36:41.226254 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:36:41.266204 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1098) Oct 13 05:36:41.266271 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:36:41.270152 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:36:41.276138 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:36:41.276210 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:36:41.288041 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:36:41.288057 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:36:41.290254 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 05:36:42.402606 ignition[1117]: Ignition 2.22.0 Oct 13 05:36:42.402621 ignition[1117]: Stage: fetch-offline Oct 13 05:36:42.403072 ignition[1117]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:36:42.403089 ignition[1117]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 13 05:36:42.403410 ignition[1117]: Ignition finished successfully Oct 13 05:36:42.406600 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:36:42.408571 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 13 05:36:42.438647 ignition[1123]: Ignition 2.22.0 Oct 13 05:36:42.438664 ignition[1123]: Stage: fetch Oct 13 05:36:42.439145 ignition[1123]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:36:42.439157 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 13 05:36:42.439263 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 13 05:36:42.447622 ignition[1123]: PUT result: OK Oct 13 05:36:42.449948 ignition[1123]: parsed url from cmdline: "" Oct 13 05:36:42.449957 ignition[1123]: no config URL provided Oct 13 05:36:42.449967 ignition[1123]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:36:42.449991 ignition[1123]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:36:42.450008 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 13 05:36:42.450551 ignition[1123]: PUT result: OK Oct 13 05:36:42.450598 ignition[1123]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 13 05:36:42.451339 ignition[1123]: GET result: OK Oct 13 05:36:42.451404 ignition[1123]: parsing config with SHA512: 4561f08284a3f19ee8a5856708372051caa5a67d33403eb42d70ebb65c16469e1522684cda8ad65e465bda3d998a24482607fa6d78a691e8e43693f65d2c7b0c Oct 13 05:36:42.456987 unknown[1123]: fetched base config from "system" Oct 13 05:36:42.456996 unknown[1123]: fetched base config from "system" Oct 13 05:36:42.457775 ignition[1123]: fetch: fetch complete Oct 13 05:36:42.457001 unknown[1123]: fetched user config from "aws" Oct 13 05:36:42.457780 ignition[1123]: fetch: fetch passed Oct 13 05:36:42.460355 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 13 05:36:42.457829 ignition[1123]: Ignition finished successfully Oct 13 05:36:42.462613 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 05:36:42.495800 ignition[1129]: Ignition 2.22.0 Oct 13 05:36:42.495814 ignition[1129]: Stage: kargs Oct 13 05:36:42.496218 ignition[1129]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:36:42.496230 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 13 05:36:42.496342 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 13 05:36:42.497158 ignition[1129]: PUT result: OK Oct 13 05:36:42.500591 ignition[1129]: kargs: kargs passed Oct 13 05:36:42.500670 ignition[1129]: Ignition finished successfully Oct 13 05:36:42.502887 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:36:42.504562 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:36:42.536315 ignition[1136]: Ignition 2.22.0 Oct 13 05:36:42.536331 ignition[1136]: Stage: disks Oct 13 05:36:42.536730 ignition[1136]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:36:42.536742 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 13 05:36:42.536852 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 13 05:36:42.546608 ignition[1136]: PUT result: OK Oct 13 05:36:42.550928 ignition[1136]: disks: disks passed Oct 13 05:36:42.551004 ignition[1136]: Ignition finished successfully Oct 13 05:36:42.553051 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:36:42.553722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:36:42.554184 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:36:42.554936 systemd[1]: Reached target local-fs.target - Local File Systems. 
Oct 13 05:36:42.555486 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:36:42.556081 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:36:42.557825 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:36:42.627806 systemd-fsck[1144]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Oct 13 05:36:42.631292 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:36:42.635157 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:36:42.884041 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none. Oct 13 05:36:42.884909 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:36:42.885828 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:36:42.888519 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:36:42.891144 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:36:42.892327 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 05:36:42.892718 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:36:42.892747 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:36:42.900480 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:36:42.902295 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:36:42.921422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1164) Oct 13 05:36:42.929390 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:36:42.929482 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:36:42.943107 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:36:42.943188 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:36:42.945941 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:36:44.072441 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:36:44.143779 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:36:44.148196 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:36:44.153763 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:36:44.636124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:36:44.638430 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:36:44.642210 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:36:44.659700 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:36:44.662078 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:36:44.695410 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 13 05:36:44.701622 ignition[1277]: INFO : Ignition 2.22.0 Oct 13 05:36:44.701622 ignition[1277]: INFO : Stage: mount Oct 13 05:36:44.703490 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:36:44.703490 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 13 05:36:44.703490 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 13 05:36:44.704779 ignition[1277]: INFO : PUT result: OK Oct 13 05:36:44.707242 ignition[1277]: INFO : mount: mount passed Oct 13 05:36:44.707819 ignition[1277]: INFO : Ignition finished successfully Oct 13 05:36:44.709389 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:36:44.710973 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:36:44.740328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:36:44.768040 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1289) Oct 13 05:36:44.768096 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:36:44.770186 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:36:44.779035 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:36:44.779118 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:36:44.781187 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:36:44.817606 ignition[1305]: INFO : Ignition 2.22.0 Oct 13 05:36:44.817606 ignition[1305]: INFO : Stage: files Oct 13 05:36:44.818808 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:36:44.818808 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 13 05:36:44.818808 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 13 05:36:44.819799 ignition[1305]: INFO : PUT result: OK Oct 13 05:36:44.821475 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:36:44.823030 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:36:44.823030 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:36:44.851544 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:36:44.852558 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:36:44.852558 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:36:44.852116 unknown[1305]: wrote ssh authorized keys file for user: core Oct 13 05:36:44.882034 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:36:44.883056 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 05:36:44.948084 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:36:45.163523 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:36:45.164523 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:36:45.164523 ignition[1305]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 05:36:45.164523 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:36:45.164523 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:36:45.164523 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:36:45.164523 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:36:45.164523 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:36:45.164523 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:36:45.169752 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:36:45.170552 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:36:45.170552 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:36:45.172775 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:36:45.172775 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:36:45.174549 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 13 05:36:45.548757 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 05:36:46.620391 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:36:46.620391 ignition[1305]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 05:36:46.622331 ignition[1305]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:36:46.627805 ignition[1305]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:36:46.627805 ignition[1305]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 05:36:46.627805 ignition[1305]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:36:46.631786 ignition[1305]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:36:46.631786 ignition[1305]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:36:46.631786 ignition[1305]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Oct 13 05:36:46.631786 ignition[1305]: INFO : files: files passed Oct 13 05:36:46.631786 ignition[1305]: INFO : Ignition finished successfully Oct 13 05:36:46.630385 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:36:46.633112 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:36:46.636806 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 05:36:46.643428 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 05:36:46.644218 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 05:36:46.690532 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:36:46.690532 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:36:46.694373 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:36:46.697136 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:36:46.697854 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:36:46.700134 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:36:46.754209 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:36:46.754321 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:36:46.756005 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:36:46.756658 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:36:46.757609 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:36:46.758493 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:36:46.782933 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:36:46.785362 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:36:46.811077 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:36:46.811315 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:36:46.812248 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:36:46.813084 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 05:36:46.813911 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:36:46.814139 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:36:46.815394 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:36:46.816210 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:36:46.817060 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:36:46.817820 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:36:46.818535 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:36:46.819489 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:36:46.820299 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Oct 13 05:36:46.821105 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:36:46.821893 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:36:46.822851 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:36:46.823844 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:36:46.824638 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 05:36:46.824884 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:36:46.825930 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:36:46.826920 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:36:46.827550 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 05:36:46.827684 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:36:46.828362 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:36:46.828582 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:36:46.829595 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:36:46.829827 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:36:46.830982 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:36:46.831167 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:36:46.834139 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 05:36:46.836298 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 05:36:46.836800 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:36:46.838181 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:36:46.840328 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:36:46.840560 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:36:46.841410 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:36:46.842166 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:36:46.852522 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:36:46.852653 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 05:36:46.877665 ignition[1359]: INFO : Ignition 2.22.0 Oct 13 05:36:46.878335 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 05:36:46.880101 ignition[1359]: INFO : Stage: umount Oct 13 05:36:46.880101 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:36:46.880101 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 13 05:36:46.880101 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 13 05:36:46.882343 ignition[1359]: INFO : PUT result: OK Oct 13 05:36:46.885869 ignition[1359]: INFO : umount: umount passed Oct 13 05:36:46.886943 ignition[1359]: INFO : Ignition finished successfully Oct 13 05:36:46.888738 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:36:46.888905 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:36:46.890109 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:36:46.890233 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Oct 13 05:36:46.892211 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:36:46.892302 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:36:46.893059 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:36:46.893132 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:36:46.893702 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 13 05:36:46.893772 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 13 05:36:46.894425 systemd[1]: Stopped target network.target - Network. Oct 13 05:36:46.895191 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:36:46.895266 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:36:46.895859 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:36:46.896462 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 05:36:46.896535 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:36:46.897130 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:36:46.897725 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:36:46.898385 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:36:46.898447 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:36:46.899152 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:36:46.899209 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:36:46.899783 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 05:36:46.899870 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:36:46.900801 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 05:36:46.900870 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:36:46.901490 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 05:36:46.901558 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:36:46.902292 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:36:46.903459 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:36:46.914556 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:36:46.914838 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:36:46.917279 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:36:46.917411 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:36:46.921128 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:36:46.921649 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:36:46.921708 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:36:46.923769 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:36:46.925342 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 05:36:46.925426 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:36:46.926239 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:36:46.926305 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:36:46.928130 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Oct 13 05:36:46.928191 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 05:36:46.930480 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:36:46.947326 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 05:36:46.947506 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:36:46.950647 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:36:46.950896 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:36:46.954439 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:36:46.954504 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:36:46.955578 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:36:46.955660 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:36:46.960394 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 05:36:46.960486 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:36:46.961633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:36:46.961707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:36:46.963915 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:36:46.965553 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:36:46.965646 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:36:46.966268 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:36:46.966330 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:36:46.966977 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 13 05:36:46.969092 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:36:46.969804 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:36:46.969869 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:36:46.971087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:36:46.971153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:36:46.984102 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:36:46.989577 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:36:46.992868 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:36:46.992982 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:36:46.993763 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 05:36:46.995590 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:36:47.023243 systemd[1]: Switching root. Oct 13 05:36:47.051678 systemd-journald[303]: Journal stopped Oct 13 05:36:50.149097 systemd-journald[303]: Received SIGTERM from PID 1 (systemd). 
Oct 13 05:36:50.149166 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:36:50.149181 kernel: SELinux: policy capability open_perms=1 Oct 13 05:36:50.149198 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:36:50.149214 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:36:50.149227 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:36:50.149244 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:36:50.149256 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:36:50.149271 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:36:50.149284 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:36:50.149297 kernel: audit: type=1403 audit(1760333808.043:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:36:50.149311 systemd[1]: Successfully loaded SELinux policy in 175.553ms. Oct 13 05:36:50.149331 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.727ms. Oct 13 05:36:50.149346 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:36:50.149360 systemd[1]: Detected virtualization amazon. Oct 13 05:36:50.149375 systemd[1]: Detected architecture x86-64. Oct 13 05:36:50.149388 systemd[1]: Detected first boot. Oct 13 05:36:50.149402 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:36:50.149416 zram_generator::config[1402]: No configuration found. Oct 13 05:36:50.149430 kernel: Guest personality initialized and is inactive Oct 13 05:36:50.149443 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 05:36:50.149458 kernel: Initialized host personality Oct 13 05:36:50.149471 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:36:50.149484 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:36:50.149497 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:36:50.149511 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 05:36:50.149525 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:36:50.149541 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:36:50.149556 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:36:50.149570 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:36:50.149584 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 05:36:50.149598 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:36:50.149611 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:36:50.149625 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:36:50.149638 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 05:36:50.149655 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:36:50.149669 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 13 05:36:50.149682 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:36:50.149695 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 05:36:50.149708 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:36:50.149722 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:36:50.149738 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 05:36:50.149751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:36:50.149765 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:36:50.149778 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:36:50.149791 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:36:50.149805 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 05:36:50.149819 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:36:50.149835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:36:50.149848 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:36:50.149861 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:36:50.149875 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:36:50.149889 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:36:50.149902 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:36:50.149915 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:36:50.149929 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:36:50.149945 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:36:50.149959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:36:50.149972 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 05:36:50.149985 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:36:50.149999 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:36:50.150030 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:36:50.150044 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:36:50.150064 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 05:36:50.150077 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 05:36:50.150090 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:36:50.150104 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:36:50.150117 systemd[1]: Reached target machines.target - Containers. Oct 13 05:36:50.150130 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 05:36:50.150147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 13 05:36:50.150160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:36:50.150173 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:36:50.150186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:36:50.150199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:36:50.150213 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:36:50.150226 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:36:50.150242 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:36:50.150256 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:36:50.150269 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:36:50.150283 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 05:36:50.150296 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:36:50.150309 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 05:36:50.150323 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:36:50.150339 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:36:50.150352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:36:50.150366 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:36:50.150379 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:36:50.150395 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 05:36:50.150408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:36:50.150422 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:36:50.150437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 05:36:50.150450 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 05:36:50.150464 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 05:36:50.150477 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 05:36:50.150493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:36:50.150506 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:36:50.150520 kernel: fuse: init (API version 7.41) Oct 13 05:36:50.150532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:36:50.150552 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:36:50.150568 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:36:50.150582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:36:50.150598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:36:50.150611 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 13 05:36:50.150624 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:36:50.150637 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:36:50.150651 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:36:50.150667 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:36:50.150692 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:36:50.150712 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:36:50.150732 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:36:50.150751 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:36:50.150769 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:36:50.150786 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 05:36:50.150799 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:36:50.150812 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:36:50.150826 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:36:50.150839 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:36:50.150853 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:36:50.150867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:36:50.150910 systemd-journald[1481]: Collecting audit messages is disabled. Oct 13 05:36:50.150944 systemd-journald[1481]: Journal started Oct 13 05:36:50.150970 systemd-journald[1481]: Runtime Journal (/run/log/journal/ec2e26e6dd68e790377b6c9cdf323ccc) is 4.8M, max 38.4M, 33.6M free. Oct 13 05:36:49.783143 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:36:49.796430 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Oct 13 05:36:49.796977 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 05:36:50.178779 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:36:50.183063 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:36:50.196896 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:36:50.196977 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:36:50.207072 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:36:50.210036 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:36:50.220046 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:36:50.224033 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:36:50.228172 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:36:50.228974 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Oct 13 05:36:50.231884 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:36:50.232441 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:36:50.249567 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:36:50.256806 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:36:50.273244 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:36:50.275652 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:36:50.277961 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:36:50.287060 kernel: loop1: detected capacity change from 0 to 72360 Oct 13 05:36:50.290258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:36:50.312504 systemd-journald[1481]: Time spent on flushing to /var/log/journal/ec2e26e6dd68e790377b6c9cdf323ccc is 30.891ms for 1014 entries. Oct 13 05:36:50.312504 systemd-journald[1481]: System Journal (/var/log/journal/ec2e26e6dd68e790377b6c9cdf323ccc) is 8M, max 588.1M, 580.1M free. Oct 13 05:36:50.352202 systemd-journald[1481]: Received client request to flush runtime journal. Oct 13 05:36:50.352247 kernel: ACPI: bus type drm_connector registered Oct 13 05:36:50.317280 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:36:50.317451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:36:50.326957 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Oct 13 05:36:50.326971 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Oct 13 05:36:50.335484 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:36:50.338844 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:36:50.354432 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:36:50.373038 kernel: loop2: detected capacity change from 0 to 128048 Oct 13 05:36:50.388320 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:36:50.399172 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:36:50.403265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:36:50.407181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:36:50.439731 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Oct 13 05:36:50.440166 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Oct 13 05:36:50.445678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:36:50.516916 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:36:50.560824 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:36:50.676417 systemd-resolved[1555]: Positive Trust Anchors: Oct 13 05:36:50.676825 systemd-resolved[1555]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:36:50.676895 systemd-resolved[1555]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:36:50.677011 systemd-resolved[1555]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:36:50.683788 systemd-resolved[1555]: Defaulting to hostname 'linux'. Oct 13 05:36:50.685745 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:36:50.686769 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:36:50.794058 kernel: loop3: detected capacity change from 0 to 110984 Oct 13 05:36:50.798748 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:36:51.004608 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:36:51.007225 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:36:51.045193 systemd-udevd[1568]: Using default interface naming scheme 'v257'. Oct 13 05:36:51.064046 kernel: loop4: detected capacity change from 0 to 229808 Oct 13 05:36:51.159009 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:36:51.163210 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:36:51.232694 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:36:51.234646 (udev-worker)[1577]: Network interface NamePolicy= disabled on kernel command line. Oct 13 05:36:51.279132 systemd-networkd[1573]: lo: Link UP Oct 13 05:36:51.279142 systemd-networkd[1573]: lo: Gained carrier Oct 13 05:36:51.281346 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:36:51.282124 systemd[1]: Reached target network.target - Network. Oct 13 05:36:51.287206 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:36:51.289819 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:36:51.320004 systemd-networkd[1573]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:36:51.320815 systemd-networkd[1573]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:36:51.326184 kernel: loop5: detected capacity change from 0 to 72360 Oct 13 05:36:51.328596 systemd-networkd[1573]: eth0: Link UP Oct 13 05:36:51.331380 systemd-networkd[1573]: eth0: Gained carrier Oct 13 05:36:51.331416 systemd-networkd[1573]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:36:51.336786 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Oct 13 05:36:51.341105 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Oct 13 05:36:51.343138 systemd-networkd[1573]: eth0: DHCPv4 address 172.31.26.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 13 05:36:51.352069 kernel: loop6: detected capacity change from 0 to 128048 Oct 13 05:36:51.355042 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:36:51.374088 kernel: loop7: detected capacity change from 0 to 110984 Oct 13 05:36:51.387041 kernel: loop1: detected capacity change from 0 to 229808 Oct 13 05:36:51.391045 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Oct 13 05:36:51.395368 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:36:51.395458 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Oct 13 05:36:51.400136 kernel: ACPI: button: Sleep Button [SLPF] Oct 13 05:36:51.411085 (sd-merge)[1599]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Oct 13 05:36:51.417119 (sd-merge)[1599]: Merged extensions into '/usr'. Oct 13 05:36:51.422744 systemd[1]: Reload requested from client PID 1517 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:36:51.422768 systemd[1]: Reloading... Oct 13 05:36:51.525044 zram_generator::config[1641]: No configuration found. Oct 13 05:36:51.832355 systemd[1]: Reloading finished in 408 ms. Oct 13 05:36:51.854063 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:36:51.901244 systemd[1]: Starting ensure-sysext.service... Oct 13 05:36:51.903510 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:36:51.907713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:36:51.942643 systemd[1]: Reload requested from client PID 1781 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:36:51.942680 systemd[1]: Reloading... Oct 13 05:36:52.007444 systemd-tmpfiles[1782]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:36:52.009492 systemd-tmpfiles[1782]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:36:52.010049 systemd-tmpfiles[1782]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:36:52.010602 systemd-tmpfiles[1782]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:36:52.012103 systemd-tmpfiles[1782]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:36:52.013501 systemd-tmpfiles[1782]: ACLs are not supported, ignoring. Oct 13 05:36:52.013699 systemd-tmpfiles[1782]: ACLs are not supported, ignoring. Oct 13 05:36:52.023326 systemd-tmpfiles[1782]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:36:52.023504 systemd-tmpfiles[1782]: Skipping /boot Oct 13 05:36:52.040050 systemd-tmpfiles[1782]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:36:52.040217 systemd-tmpfiles[1782]: Skipping /boot Oct 13 05:36:52.063085 zram_generator::config[1822]: No configuration found. Oct 13 05:36:52.312246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 13 05:36:52.313474 systemd[1]: Reloading finished in 370 ms. 
Oct 13 05:36:52.340358 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:36:52.343362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:36:52.369945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:36:52.371597 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:36:52.375310 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:36:52.376225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:36:52.378386 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 05:36:52.384597 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:36:52.388419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:36:52.393494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:36:52.395218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:36:52.398157 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:36:52.398839 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:36:52.403790 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:36:52.412163 systemd-networkd[1573]: eth0: Gained IPv6LL Oct 13 05:36:52.416877 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:36:52.417784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:36:52.426397 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:36:52.429087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:36:52.429933 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:36:52.432499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:36:52.432950 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:36:52.434809 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:36:52.435491 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:36:52.455769 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:36:52.456572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:36:52.456933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:36:52.461436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:36:52.464548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:36:52.467522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Oct 13 05:36:52.468195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:36:52.468465 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:36:52.468708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:36:52.481933 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:36:52.482329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:36:52.487225 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:36:52.488040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:36:52.488216 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:36:52.488449 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:36:52.492239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:36:52.493743 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:36:52.494954 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:36:52.496481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:36:52.498943 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:36:52.512276 systemd[1]: Finished ensure-sysext.service. Oct 13 05:36:52.513528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:36:52.513764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:36:52.521259 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:36:52.521530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:36:52.527164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:36:52.540814 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:36:52.541753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:36:52.541966 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:36:52.543373 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:36:52.559501 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:36:52.560463 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 13 05:36:52.641434 augenrules[1921]: No rules Oct 13 05:36:52.643091 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:36:52.643329 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:36:55.461648 ldconfig[1877]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:36:55.467728 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:36:55.469342 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:36:55.492076 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:36:55.492804 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:36:55.493434 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:36:55.493898 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:36:55.494346 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:36:55.495047 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:36:55.495541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:36:55.495940 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:36:55.496476 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:36:55.496523 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:36:55.496897 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:36:55.498829 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:36:55.500529 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:36:55.503221 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:36:55.503797 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:36:55.504627 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:36:55.507900 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:36:55.508718 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:36:55.509884 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:36:55.511351 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:36:55.511756 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:36:55.512204 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:36:55.512239 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:36:55.513379 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:36:55.518231 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 13 05:36:55.520457 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:36:55.525515 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:36:55.529214 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Oct 13 05:36:55.535298 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:36:55.537124 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:36:55.539792 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:36:55.550246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:36:55.554865 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:36:55.560286 systemd[1]: Started ntpd.service - Network Time Service. Oct 13 05:36:55.567607 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:36:55.571214 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:36:55.576562 systemd[1]: Starting setup-oem.service - Setup OEM... Oct 13 05:36:55.585514 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:36:55.598301 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:36:55.609378 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:36:55.610163 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:36:55.614316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:36:55.621924 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:36:55.627578 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:36:55.638750 jq[1936]: false Oct 13 05:36:55.644670 jq[1951]: true Oct 13 05:36:55.642106 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:36:55.664046 google_oslogin_nss_cache[1938]: oslogin_cache_refresh[1938]: Refreshing passwd entry cache Oct 13 05:36:55.650881 oslogin_cache_refresh[1938]: Refreshing passwd entry cache Oct 13 05:36:55.678009 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:36:55.679173 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:36:55.692567 google_oslogin_nss_cache[1938]: oslogin_cache_refresh[1938]: Failure getting users, quitting Oct 13 05:36:55.692567 google_oslogin_nss_cache[1938]: oslogin_cache_refresh[1938]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:36:55.692567 google_oslogin_nss_cache[1938]: oslogin_cache_refresh[1938]: Refreshing group entry cache Oct 13 05:36:55.691566 oslogin_cache_refresh[1938]: Failure getting users, quitting Oct 13 05:36:55.691589 oslogin_cache_refresh[1938]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:36:55.691643 oslogin_cache_refresh[1938]: Refreshing group entry cache Oct 13 05:36:55.703825 google_oslogin_nss_cache[1938]: oslogin_cache_refresh[1938]: Failure getting groups, quitting Oct 13 05:36:55.703825 google_oslogin_nss_cache[1938]: oslogin_cache_refresh[1938]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:36:55.698377 oslogin_cache_refresh[1938]: Failure getting groups, quitting Oct 13 05:36:55.698393 oslogin_cache_refresh[1938]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Oct 13 05:36:55.723420 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:36:55.727850 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:36:55.730673 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:36:55.731639 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:36:55.745764 jq[1954]: true Oct 13 05:36:55.745206 (ntainerd)[1965]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:36:55.751239 ntpd[1941]: ntpd 4.2.8p18@1.4062-o Mon Oct 13 03:01:42 UTC 2025 (1): Starting Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: ntpd 4.2.8p18@1.4062-o Mon Oct 13 03:01:42 UTC 2025 (1): Starting Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: ---------------------------------------------------- Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: ntp-4 is maintained by Network Time Foundation, Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: corporation. Support and training for ntp-4 are Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: available at https://www.nwtime.org/support Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: ---------------------------------------------------- Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: proto: precision = 0.092 usec (-23) Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: basedate set to 2025-10-01 Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: gps base set to 2025-10-05 (week 2387) Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123 Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123 Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: Listen normally on 3 eth0 172.31.26.130:123 Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: Listen normally on 4 lo [::1]:123 Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: Listen normally on 5 eth0 [fe80::42a:1dff:fedd:cc23%2]:123 Oct 13 05:36:55.764629 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: Listening on routing socket on fd #22 for interface updates Oct 13 05:36:55.751308 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 13 05:36:55.751319 ntpd[1941]: ---------------------------------------------------- Oct 13 05:36:55.751329 ntpd[1941]: ntp-4 is maintained by Network Time Foundation, Oct 13 05:36:55.751339 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 13 05:36:55.751349 ntpd[1941]: corporation. 
Support and training for ntp-4 are Oct 13 05:36:55.751359 ntpd[1941]: available at https://www.nwtime.org/support Oct 13 05:36:55.751369 ntpd[1941]: ---------------------------------------------------- Oct 13 05:36:55.756508 ntpd[1941]: proto: precision = 0.092 usec (-23) Oct 13 05:36:55.762070 ntpd[1941]: basedate set to 2025-10-01 Oct 13 05:36:55.762095 ntpd[1941]: gps base set to 2025-10-05 (week 2387) Oct 13 05:36:55.762247 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123 Oct 13 05:36:55.762281 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 13 05:36:55.762492 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123 Oct 13 05:36:55.762519 ntpd[1941]: Listen normally on 3 eth0 172.31.26.130:123 Oct 13 05:36:55.762551 ntpd[1941]: Listen normally on 4 lo [::1]:123 Oct 13 05:36:55.762582 ntpd[1941]: Listen normally on 5 eth0 [fe80::42a:1dff:fedd:cc23%2]:123 Oct 13 05:36:55.762610 ntpd[1941]: Listening on routing socket on fd #22 for interface updates Oct 13 05:36:55.787105 extend-filesystems[1937]: Found /dev/nvme0n1p6 Oct 13 05:36:55.807193 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 13 05:36:55.807193 ntpd[1941]: 13 Oct 05:36:55 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 13 05:36:55.804574 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 13 05:36:55.807336 tar[1958]: linux-amd64/LICENSE Oct 13 05:36:55.807336 tar[1958]: linux-amd64/helm Oct 13 05:36:55.807656 extend-filesystems[1937]: Found /dev/nvme0n1p9 Oct 13 05:36:55.791096 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:36:55.804607 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 13 05:36:55.808261 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:36:55.840548 update_engine[1949]: I20251013 05:36:55.832456 1949 main.cc:92] Flatcar Update Engine starting Oct 13 05:36:55.843112 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
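At this point ntpd has enumerated its listen sockets and the kernel still reports TIME_ERROR 0x41 (clock unsynchronized), which is normal until the first successful poll. A quick way to confirm synchronization later, assuming ntpq and timedatectl are available on the host:

  # Peer table: a '*' in the first column marks the selected sync source
  ntpq -p
  # Kernel view of the clock discipline
  timedatectl status | grep -i synchronized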
Oct 13 05:36:55.848180 extend-filesystems[1937]: Checking size of /dev/nvme0n1p9 Oct 13 05:36:55.859343 coreos-metadata[1933]: Oct 13 05:36:55.856 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 13 05:36:55.868677 coreos-metadata[1933]: Oct 13 05:36:55.867 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Oct 13 05:36:55.868677 coreos-metadata[1933]: Oct 13 05:36:55.868 INFO Fetch successful Oct 13 05:36:55.868677 coreos-metadata[1933]: Oct 13 05:36:55.868 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Oct 13 05:36:55.876249 coreos-metadata[1933]: Oct 13 05:36:55.873 INFO Fetch successful Oct 13 05:36:55.876249 coreos-metadata[1933]: Oct 13 05:36:55.873 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Oct 13 05:36:55.876249 coreos-metadata[1933]: Oct 13 05:36:55.874 INFO Fetch successful Oct 13 05:36:55.876249 coreos-metadata[1933]: Oct 13 05:36:55.875 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Oct 13 05:36:55.876249 coreos-metadata[1933]: Oct 13 05:36:55.875 INFO Fetch successful Oct 13 05:36:55.876249 coreos-metadata[1933]: Oct 13 05:36:55.875 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Oct 13 05:36:55.876249 coreos-metadata[1933]: Oct 13 05:36:55.876 INFO Fetch failed with 404: resource not found Oct 13 05:36:55.876249 coreos-metadata[1933]: Oct 13 05:36:55.876 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Oct 13 05:36:55.877413 coreos-metadata[1933]: Oct 13 05:36:55.876 INFO Fetch successful Oct 13 05:36:55.880213 coreos-metadata[1933]: Oct 13 05:36:55.880 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Oct 13 05:36:55.881513 coreos-metadata[1933]: Oct 13 05:36:55.880 INFO Fetch successful Oct 13 05:36:55.881513 coreos-metadata[1933]: Oct 13 05:36:55.880 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Oct 13 05:36:55.883676 coreos-metadata[1933]: Oct 13 05:36:55.882 INFO Fetch successful Oct 13 05:36:55.883676 coreos-metadata[1933]: Oct 13 05:36:55.882 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Oct 13 05:36:55.885063 coreos-metadata[1933]: Oct 13 05:36:55.884 INFO Fetch successful Oct 13 05:36:55.885063 coreos-metadata[1933]: Oct 13 05:36:55.884 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Oct 13 05:36:55.887457 coreos-metadata[1933]: Oct 13 05:36:55.885 INFO Fetch successful Oct 13 05:36:55.905359 systemd[1]: Finished setup-oem.service - Setup OEM. Oct 13 05:36:55.907350 dbus-daemon[1934]: [system] SELinux support is enabled Oct 13 05:36:55.910886 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Oct 13 05:36:55.912693 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:36:55.917877 dbus-daemon[1934]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1573 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 13 05:36:55.919493 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
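The coreos-metadata agent above follows the IMDSv2 flow: a PUT to the token endpoint, then metadata fetches that present the token; the 404 on the ipv6 entry just reflects that no IPv6 address is assigned to this instance. A hedged curl sketch of the same exchange (endpoints as in the log; the 21600-second TTL is an arbitrary choice):

  # IMDSv2: obtain a session token, then pass it with each metadata request
  TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    "http://169.254.169.254/2021-01-03/meta-data/instance-id"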
Oct 13 05:36:55.919533 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:36:55.920147 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:36:55.920175 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:36:55.922281 update_engine[1949]: I20251013 05:36:55.922079 1949 update_check_scheduler.cc:74] Next update check in 8m56s Oct 13 05:36:55.922787 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:36:55.925171 dbus-daemon[1934]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 05:36:55.928765 extend-filesystems[1937]: Resized partition /dev/nvme0n1p9 Oct 13 05:36:55.937044 extend-filesystems[2022]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:36:55.936339 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Oct 13 05:36:55.941633 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Oct 13 05:36:55.944920 systemd-logind[1948]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:36:55.944953 systemd-logind[1948]: Watching system buttons on /dev/input/event3 (Sleep Button) Oct 13 05:36:55.944979 systemd-logind[1948]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:36:55.957730 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Oct 13 05:36:55.960323 systemd-logind[1948]: New seat seat0. Oct 13 05:36:55.968805 extend-filesystems[2022]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 13 05:36:55.968805 extend-filesystems[2022]: old_desc_blocks = 1, new_desc_blocks = 2 Oct 13 05:36:55.968805 extend-filesystems[2022]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Oct 13 05:36:55.980338 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:36:55.983069 extend-filesystems[1937]: Resized filesystem in /dev/nvme0n1p9 Oct 13 05:36:55.981198 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:36:55.983441 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:36:55.983719 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:36:56.048435 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 13 05:36:56.049564 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 05:36:56.083255 bash[2031]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:36:56.086936 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:36:56.104100 systemd[1]: Starting sshkeys.service... Oct 13 05:36:56.183876 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 13 05:36:56.192929 dbus-daemon[1934]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 13 05:36:56.195503 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 13 05:36:56.199427 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
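extend-filesystems grew the root filesystem online from 1617920 to 2604027 4 KiB blocks, i.e. from roughly 6.2 GiB to roughly 9.9 GiB (2604027 x 4096 bytes ≈ 9.93 GiB). The same growth could be reproduced by hand with something like the sketch below (growpart from cloud-utils is an assumption; resize2fs is what the log shows running):

  # Extend partition 9 to the end of the disk, then grow the mounted ext4 filesystem in place
  growpart /dev/nvme0n1 9
  resize2fs /dev/nvme0n1p9   # ext4 supports online growth while mounted on /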
Oct 13 05:36:56.206210 dbus-daemon[1934]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2021 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 13 05:36:56.215211 systemd[1]: Starting polkit.service - Authorization Manager... Oct 13 05:36:56.481805 amazon-ssm-agent[2017]: Initializing new seelog logger Oct 13 05:36:56.493050 amazon-ssm-agent[2017]: New Seelog Logger Creation Complete Oct 13 05:36:56.493050 amazon-ssm-agent[2017]: 2025/10/13 05:36:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:56.493050 amazon-ssm-agent[2017]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:56.493050 amazon-ssm-agent[2017]: 2025/10/13 05:36:56 processing appconfig overrides Oct 13 05:36:56.497145 amazon-ssm-agent[2017]: 2025/10/13 05:36:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:56.497145 amazon-ssm-agent[2017]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:56.497284 amazon-ssm-agent[2017]: 2025/10/13 05:36:56 processing appconfig overrides Oct 13 05:36:56.497597 amazon-ssm-agent[2017]: 2025/10/13 05:36:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:56.497597 amazon-ssm-agent[2017]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:56.497692 amazon-ssm-agent[2017]: 2025/10/13 05:36:56 processing appconfig overrides Oct 13 05:36:56.500341 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.4953 INFO Proxy environment variables: Oct 13 05:36:56.501988 amazon-ssm-agent[2017]: 2025/10/13 05:36:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:56.504308 amazon-ssm-agent[2017]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:56.504308 amazon-ssm-agent[2017]: 2025/10/13 05:36:56 processing appconfig overrides Oct 13 05:36:56.536432 polkitd[2057]: Started polkitd version 126 Oct 13 05:36:56.579427 polkitd[2057]: Loading rules from directory /etc/polkit-1/rules.d Oct 13 05:36:56.579936 polkitd[2057]: Loading rules from directory /run/polkit-1/rules.d Oct 13 05:36:56.579995 polkitd[2057]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 13 05:36:56.581709 polkitd[2057]: Loading rules from directory /usr/local/share/polkit-1/rules.d Oct 13 05:36:56.581755 polkitd[2057]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 13 05:36:56.581828 polkitd[2057]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 13 05:36:56.593138 polkitd[2057]: Finished loading, compiling and executing 2 rules Oct 13 05:36:56.596503 systemd[1]: Started polkit.service - Authorization Manager. 
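polkitd found rules only under /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d; the two "No such file or directory" messages refer to optional search paths, and the daemon still compiled 2 rules. A small verification sketch (pkaction ships with polkit; its presence on this image is assumed):

  # Rule files actually available to polkitd
  ls -l /etc/polkit-1/rules.d /usr/share/polkit-1/rules.d 2>/dev/null
  # Confirm polkitd is answering on the system bus by listing registered actions
  pkaction | head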
Oct 13 05:36:56.602483 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.4970 INFO https_proxy: Oct 13 05:36:56.609867 dbus-daemon[1934]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 13 05:36:56.611175 polkitd[2057]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 13 05:36:56.621622 locksmithd[2023]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:36:56.642304 coreos-metadata[2055]: Oct 13 05:36:56.642 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 13 05:36:56.654489 coreos-metadata[2055]: Oct 13 05:36:56.654 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Oct 13 05:36:56.656461 coreos-metadata[2055]: Oct 13 05:36:56.656 INFO Fetch successful Oct 13 05:36:56.656552 coreos-metadata[2055]: Oct 13 05:36:56.656 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 13 05:36:56.657512 coreos-metadata[2055]: Oct 13 05:36:56.657 INFO Fetch successful Oct 13 05:36:56.664743 unknown[2055]: wrote ssh authorized keys file for user: core Oct 13 05:36:56.688670 systemd-hostnamed[2021]: Hostname set to (transient) Oct 13 05:36:56.688791 systemd-resolved[1555]: System hostname changed to 'ip-172-31-26-130'. Oct 13 05:36:56.702655 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.4970 INFO http_proxy: Oct 13 05:36:56.720840 update-ssh-keys[2153]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:36:56.719409 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 13 05:36:56.729218 systemd[1]: Finished sshkeys.service. Oct 13 05:36:56.811459 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.4970 INFO no_proxy: Oct 13 05:36:56.841256 sshd_keygen[1990]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:36:56.915113 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.4972 INFO Checking if agent identity type OnPrem can be assumed Oct 13 05:36:56.992686 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:36:56.998385 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:36:57.014094 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.4974 INFO Checking if agent identity type EC2 can be assumed Oct 13 05:36:57.046001 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:36:57.046331 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:36:57.054852 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:36:57.084057 containerd[1965]: time="2025-10-13T05:36:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:36:57.087611 containerd[1965]: time="2025-10-13T05:36:57.087569310Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:36:57.089850 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:36:57.093754 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:36:57.099524 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:36:57.100410 systemd[1]: Reached target getty.target - Login Prompts. 
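Here coreos-metadata-sshkeys writes the fetched public key into /home/core/.ssh/authorized_keys, systemd-hostnamed sets the transient hostname to ip-172-31-26-130, and sshd-keygen generates RSA, ECDSA and ED25519 host keys. A short sketch to verify those results (standard OpenSSH host-key paths assumed):

  hostnamectl status | grep -i hostname
  wc -l /home/core/.ssh/authorized_keys
  for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done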
Oct 13 05:36:57.116180 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.8574 INFO Agent will take identity from EC2 Oct 13 05:36:57.135037 containerd[1965]: time="2025-10-13T05:36:57.134656134Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.959µs" Oct 13 05:36:57.135037 containerd[1965]: time="2025-10-13T05:36:57.134704306Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:36:57.135037 containerd[1965]: time="2025-10-13T05:36:57.134730578Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:36:57.135037 containerd[1965]: time="2025-10-13T05:36:57.134914847Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:36:57.135037 containerd[1965]: time="2025-10-13T05:36:57.134933350Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:36:57.135037 containerd[1965]: time="2025-10-13T05:36:57.134964574Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.135815507Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.135840609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.136137110Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.136155821Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.136171037Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.136184109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.136266616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.136481353Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.136514256Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:36:57.137005 containerd[1965]: time="2025-10-13T05:36:57.136529004Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:36:57.137980 containerd[1965]: time="2025-10-13T05:36:57.137956531Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:36:57.138920 containerd[1965]: time="2025-10-13T05:36:57.138899387Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:36:57.139569 containerd[1965]: time="2025-10-13T05:36:57.139119729Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:36:57.146148 containerd[1965]: time="2025-10-13T05:36:57.146103595Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:36:57.146647 containerd[1965]: time="2025-10-13T05:36:57.146579011Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146608350Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146760623Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146781871Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146796923Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146823198Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146854479Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146870420Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146885314Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146898687Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:36:57.147372 containerd[1965]: time="2025-10-13T05:36:57.146915622Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147799693Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147834679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147857866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147873753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147890196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147904992Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147922339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147936663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147952478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147967441Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.147982370Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.148084568Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:36:57.148458 containerd[1965]: time="2025-10-13T05:36:57.148102499Z" level=info msg="Start snapshots syncer" Oct 13 05:36:57.149819 containerd[1965]: time="2025-10-13T05:36:57.148972001Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:36:57.149819 containerd[1965]: time="2025-10-13T05:36:57.149680728Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:36:57.150057 containerd[1965]: time="2025-10-13T05:36:57.149751597Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:36:57.151034 containerd[1965]: time="2025-10-13T05:36:57.150849533Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151449314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151485474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151503613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151522659Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151540867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151556979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151573753Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151605457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151620699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:36:57.151690 containerd[1965]: time="2025-10-13T05:36:57.151636457Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152419998Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152515732Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152532415Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152547223Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152559523Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152574435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152603294Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152625422Z" level=info msg="runtime interface created" Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152632655Z" level=info msg="created NRI interface" Oct 13 05:36:57.152731 containerd[1965]: 
time="2025-10-13T05:36:57.152644161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152661472Z" level=info msg="Connect containerd service" Oct 13 05:36:57.152731 containerd[1965]: time="2025-10-13T05:36:57.152700262Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:36:57.156067 containerd[1965]: time="2025-10-13T05:36:57.155364067Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:36:57.205364 tar[1958]: linux-amd64/README.md Oct 13 05:36:57.215003 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.8591 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Oct 13 05:36:57.227464 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:36:57.315035 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.8591 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Oct 13 05:36:57.426084 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.8591 INFO [amazon-ssm-agent] Starting Core Agent Oct 13 05:36:57.525321 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.8592 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Oct 13 05:36:57.552046 amazon-ssm-agent[2017]: 2025/10/13 05:36:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:57.552046 amazon-ssm-agent[2017]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 13 05:36:57.552046 amazon-ssm-agent[2017]: 2025/10/13 05:36:57 processing appconfig overrides Oct 13 05:36:57.576510 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.8592 INFO [Registrar] Starting registrar module Oct 13 05:36:57.576658 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.8775 INFO [EC2Identity] Checking disk for registration info Oct 13 05:36:57.576708 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.8776 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Oct 13 05:36:57.576774 amazon-ssm-agent[2017]: 2025-10-13 05:36:56.8776 INFO [EC2Identity] Generating registration keypair Oct 13 05:36:57.576774 amazon-ssm-agent[2017]: 2025-10-13 05:36:57.5097 INFO [EC2Identity] Checking write access before registering Oct 13 05:36:57.576821 amazon-ssm-agent[2017]: 2025-10-13 05:36:57.5101 INFO [EC2Identity] Registering EC2 instance with Systems Manager Oct 13 05:36:57.576821 amazon-ssm-agent[2017]: 2025-10-13 05:36:57.5504 INFO [EC2Identity] EC2 registration was successful. Oct 13 05:36:57.576821 amazon-ssm-agent[2017]: 2025-10-13 05:36:57.5508 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Oct 13 05:36:57.576821 amazon-ssm-agent[2017]: 2025-10-13 05:36:57.5509 INFO [CredentialRefresher] credentialRefresher has started Oct 13 05:36:57.576821 amazon-ssm-agent[2017]: 2025-10-13 05:36:57.5509 INFO [CredentialRefresher] Starting credentials refresher loop Oct 13 05:36:57.576821 amazon-ssm-agent[2017]: 2025-10-13 05:36:57.5762 INFO EC2RoleProvider Successfully connected with instance profile role credentials Oct 13 05:36:57.576821 amazon-ssm-agent[2017]: 2025-10-13 05:36:57.5764 INFO [CredentialRefresher] Credentials ready Oct 13 05:36:57.625043 amazon-ssm-agent[2017]: 2025-10-13 05:36:57.5768 INFO [CredentialRefresher] Next credential rotation will be in 29.9999907377 minutes Oct 13 05:36:57.709584 containerd[1965]: time="2025-10-13T05:36:57.709443325Z" level=info msg="Start subscribing containerd event" Oct 13 05:36:57.709584 containerd[1965]: time="2025-10-13T05:36:57.709522958Z" level=info msg="Start recovering state" Oct 13 05:36:57.709731 containerd[1965]: time="2025-10-13T05:36:57.709665725Z" level=info msg="Start event monitor" Oct 13 05:36:57.709731 containerd[1965]: time="2025-10-13T05:36:57.709682392Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:36:57.709731 containerd[1965]: time="2025-10-13T05:36:57.709691387Z" level=info msg="Start streaming server" Oct 13 05:36:57.709731 containerd[1965]: time="2025-10-13T05:36:57.709710812Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:36:57.709878 containerd[1965]: time="2025-10-13T05:36:57.709721740Z" level=info msg="runtime interface starting up..." Oct 13 05:36:57.709878 containerd[1965]: time="2025-10-13T05:36:57.709748079Z" level=info msg="starting plugins..." Oct 13 05:36:57.709878 containerd[1965]: time="2025-10-13T05:36:57.709765663Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:36:57.711602 containerd[1965]: time="2025-10-13T05:36:57.710137443Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:36:57.711602 containerd[1965]: time="2025-10-13T05:36:57.710206489Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:36:57.711602 containerd[1965]: time="2025-10-13T05:36:57.710268225Z" level=info msg="containerd successfully booted in 0.629473s" Oct 13 05:36:57.710448 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:36:58.589446 amazon-ssm-agent[2017]: 2025-10-13 05:36:58.5893 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Oct 13 05:36:58.691495 amazon-ssm-agent[2017]: 2025-10-13 05:36:58.5912 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2214) started Oct 13 05:36:58.792398 amazon-ssm-agent[2017]: 2025-10-13 05:36:58.5913 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Oct 13 05:37:00.762504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:00.764291 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:37:00.766258 systemd[1]: Startup finished in 3.203s (kernel) + 10.107s (initrd) + 12.896s (userspace) = 26.207s. Oct 13 05:37:00.771945 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:37:02.388458 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Oct 13 05:37:02.389993 systemd[1]: Started sshd@0-172.31.26.130:22-139.178.89.65:40786.service - OpenSSH per-connection server daemon (139.178.89.65:40786). Oct 13 05:37:02.671746 sshd[2241]: Accepted publickey for core from 139.178.89.65 port 40786 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:37:02.672658 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:02.697554 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:37:02.701410 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:37:02.709456 kubelet[2231]: E1013 05:37:02.709421 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:37:02.713284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:37:02.713478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:37:02.716462 systemd[1]: kubelet.service: Consumed 1.103s CPU time, 268.6M memory peak. Oct 13 05:37:02.721481 systemd-logind[1948]: New session 1 of user core. Oct 13 05:37:02.731887 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:37:02.735378 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:37:02.749799 (systemd)[2248]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:37:04.749349 systemd-resolved[1555]: Clock change detected. Flushing caches. Oct 13 05:37:04.751457 systemd-logind[1948]: New session c1 of user core. Oct 13 05:37:04.899319 systemd[2248]: Queued start job for default target default.target. Oct 13 05:37:04.909843 systemd[2248]: Created slice app.slice - User Application Slice. Oct 13 05:37:04.909893 systemd[2248]: Reached target paths.target - Paths. Oct 13 05:37:04.910108 systemd[2248]: Reached target timers.target - Timers. Oct 13 05:37:04.912249 systemd[2248]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:37:04.934771 systemd[2248]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:37:04.934939 systemd[2248]: Reached target sockets.target - Sockets. Oct 13 05:37:04.935008 systemd[2248]: Reached target basic.target - Basic System. Oct 13 05:37:04.935062 systemd[2248]: Reached target default.target - Main User Target. Oct 13 05:37:04.935103 systemd[2248]: Startup finished in 176ms. Oct 13 05:37:04.935253 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:37:04.946671 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:37:05.094332 systemd[1]: Started sshd@1-172.31.26.130:22-139.178.89.65:40798.service - OpenSSH per-connection server daemon (139.178.89.65:40798). Oct 13 05:37:05.270636 sshd[2259]: Accepted publickey for core from 139.178.89.65 port 40798 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:37:05.272117 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:05.277663 systemd-logind[1948]: New session 2 of user core. Oct 13 05:37:05.286689 systemd[1]: Started session-2.scope - Session 2 of User core. 
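The kubelet failure above is the expected first-boot behavior on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm (an assumption, since the log does not show which bootstrapper is used), and systemd keeps restarting the unit until the file exists. A quick check:

  # Missing until the node is initialized or joined
  ls /var/lib/kubelet/config.yaml 2>/dev/null || echo "kubelet not configured yet"
  systemctl status kubelet --no-pager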
Oct 13 05:37:05.401588 sshd[2262]: Connection closed by 139.178.89.65 port 40798 Oct 13 05:37:05.402137 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:05.405824 systemd[1]: sshd@1-172.31.26.130:22-139.178.89.65:40798.service: Deactivated successfully. Oct 13 05:37:05.407613 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:37:05.409363 systemd-logind[1948]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:37:05.410344 systemd-logind[1948]: Removed session 2. Oct 13 05:37:05.439891 systemd[1]: Started sshd@2-172.31.26.130:22-139.178.89.65:40810.service - OpenSSH per-connection server daemon (139.178.89.65:40810). Oct 13 05:37:05.615682 sshd[2268]: Accepted publickey for core from 139.178.89.65 port 40810 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:37:05.616827 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:05.622131 systemd-logind[1948]: New session 3 of user core. Oct 13 05:37:05.624606 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:37:05.747885 sshd[2271]: Connection closed by 139.178.89.65 port 40810 Oct 13 05:37:05.748761 sshd-session[2268]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:05.753507 systemd[1]: sshd@2-172.31.26.130:22-139.178.89.65:40810.service: Deactivated successfully. Oct 13 05:37:05.755319 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:37:05.756400 systemd-logind[1948]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:37:05.757952 systemd-logind[1948]: Removed session 3. Oct 13 05:37:05.784352 systemd[1]: Started sshd@3-172.31.26.130:22-139.178.89.65:40818.service - OpenSSH per-connection server daemon (139.178.89.65:40818). Oct 13 05:37:05.954721 sshd[2277]: Accepted publickey for core from 139.178.89.65 port 40818 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:37:05.956524 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:05.962615 systemd-logind[1948]: New session 4 of user core. Oct 13 05:37:05.971670 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:37:06.090810 sshd[2280]: Connection closed by 139.178.89.65 port 40818 Oct 13 05:37:06.091342 sshd-session[2277]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:06.095796 systemd[1]: sshd@3-172.31.26.130:22-139.178.89.65:40818.service: Deactivated successfully. Oct 13 05:37:06.097860 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:37:06.098869 systemd-logind[1948]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:37:06.100487 systemd-logind[1948]: Removed session 4. Oct 13 05:37:06.128509 systemd[1]: Started sshd@4-172.31.26.130:22-139.178.89.65:40824.service - OpenSSH per-connection server daemon (139.178.89.65:40824). Oct 13 05:37:06.302663 sshd[2286]: Accepted publickey for core from 139.178.89.65 port 40824 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:37:06.304319 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:06.310028 systemd-logind[1948]: New session 5 of user core. Oct 13 05:37:06.315650 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 05:37:06.501552 sudo[2290]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:37:06.501923 sudo[2290]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:37:06.514618 sudo[2290]: pam_unix(sudo:session): session closed for user root Oct 13 05:37:06.538147 sshd[2289]: Connection closed by 139.178.89.65 port 40824 Oct 13 05:37:06.538896 sshd-session[2286]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:06.542947 systemd[1]: sshd@4-172.31.26.130:22-139.178.89.65:40824.service: Deactivated successfully. Oct 13 05:37:06.545131 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:37:06.547623 systemd-logind[1948]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:37:06.548922 systemd-logind[1948]: Removed session 5. Oct 13 05:37:06.573377 systemd[1]: Started sshd@5-172.31.26.130:22-139.178.89.65:40834.service - OpenSSH per-connection server daemon (139.178.89.65:40834). Oct 13 05:37:06.749236 sshd[2296]: Accepted publickey for core from 139.178.89.65 port 40834 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:37:06.750687 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:06.756993 systemd-logind[1948]: New session 6 of user core. Oct 13 05:37:06.763655 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 05:37:06.862075 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:37:06.862470 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:37:06.869784 sudo[2301]: pam_unix(sudo:session): session closed for user root Oct 13 05:37:06.876976 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:37:06.877242 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:37:06.887701 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:37:06.925443 augenrules[2323]: No rules Oct 13 05:37:06.926755 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:37:06.927142 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:37:06.928337 sudo[2300]: pam_unix(sudo:session): session closed for user root Oct 13 05:37:06.951318 sshd[2299]: Connection closed by 139.178.89.65 port 40834 Oct 13 05:37:06.951860 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:06.956327 systemd[1]: sshd@5-172.31.26.130:22-139.178.89.65:40834.service: Deactivated successfully. Oct 13 05:37:06.958073 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:37:06.958911 systemd-logind[1948]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:37:06.960381 systemd-logind[1948]: Removed session 6. Oct 13 05:37:06.986326 systemd[1]: Started sshd@6-172.31.26.130:22-139.178.89.65:40836.service - OpenSSH per-connection server daemon (139.178.89.65:40836). Oct 13 05:37:07.160631 sshd[2332]: Accepted publickey for core from 139.178.89.65 port 40836 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:37:07.161848 sshd-session[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:07.167054 systemd-logind[1948]: New session 7 of user core. Oct 13 05:37:07.169617 systemd[1]: Started session-7.scope - Session 7 of User core. 
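The sudo entries above show the core user switching SELinux to enforcing mode, deleting two audit rules files, and restarting audit-rules, after which augenrules reports "No rules". A sketch for confirming the resulting state (getenforce, auditctl and augenrules are assumed to be present):

  getenforce               # should print Enforcing after `setenforce 1`
  auditctl -l              # prints "No rules" when the kernel rule set is empty
  augenrules --check       # reports whether /etc/audit/rules.d needs recompiling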
Oct 13 05:37:07.270258 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:37:07.270661 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:37:08.551695 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:37:08.567833 (dockerd)[2355]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:37:09.526789 dockerd[2355]: time="2025-10-13T05:37:09.526572346Z" level=info msg="Starting up" Oct 13 05:37:09.528955 dockerd[2355]: time="2025-10-13T05:37:09.528572340Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:37:09.540120 dockerd[2355]: time="2025-10-13T05:37:09.540068128Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:37:09.627167 dockerd[2355]: time="2025-10-13T05:37:09.626943672Z" level=info msg="Loading containers: start." Oct 13 05:37:09.670452 kernel: Initializing XFRM netlink socket Oct 13 05:37:10.018656 (udev-worker)[2376]: Network interface NamePolicy= disabled on kernel command line. Oct 13 05:37:10.063341 systemd-networkd[1573]: docker0: Link UP Oct 13 05:37:10.075433 dockerd[2355]: time="2025-10-13T05:37:10.075359413Z" level=info msg="Loading containers: done." Oct 13 05:37:10.089282 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2353886862-merged.mount: Deactivated successfully. Oct 13 05:37:10.099317 dockerd[2355]: time="2025-10-13T05:37:10.099259642Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:37:10.099549 dockerd[2355]: time="2025-10-13T05:37:10.099373049Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:37:10.099549 dockerd[2355]: time="2025-10-13T05:37:10.099511261Z" level=info msg="Initializing buildkit" Oct 13 05:37:10.137656 dockerd[2355]: time="2025-10-13T05:37:10.137604982Z" level=info msg="Completed buildkit initialization" Oct 13 05:37:10.145314 dockerd[2355]: time="2025-10-13T05:37:10.145263532Z" level=info msg="Daemon has completed initialization" Oct 13 05:37:10.146165 dockerd[2355]: time="2025-10-13T05:37:10.145500291Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:37:10.145598 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:37:11.956534 containerd[1965]: time="2025-10-13T05:37:11.956471180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 13 05:37:12.562612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1069489646.mount: Deactivated successfully. 
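dockerd starts, brings up the docker0 bridge, and warns that overlay2 will not use native diff because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; the API then listens on /run/docker.sock, and containerd begins pulling the kube-apiserver image. A short sketch to confirm the daemon's storage driver and socket (docker CLI assumed):

  docker info --format '{{.Driver}} {{.ServerVersion}}'
  ls -l /run/docker.sock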
Oct 13 05:37:14.271203 containerd[1965]: time="2025-10-13T05:37:14.271147472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:14.273119 containerd[1965]: time="2025-10-13T05:37:14.273070475Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 13 05:37:14.275579 containerd[1965]: time="2025-10-13T05:37:14.275519543Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:14.282010 containerd[1965]: time="2025-10-13T05:37:14.281955652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:14.283072 containerd[1965]: time="2025-10-13T05:37:14.283032925Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.326521875s" Oct 13 05:37:14.283072 containerd[1965]: time="2025-10-13T05:37:14.283075387Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 13 05:37:14.284167 containerd[1965]: time="2025-10-13T05:37:14.284121328Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 13 05:37:14.743101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:37:14.744879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:37:14.969574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:14.978784 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:37:15.048221 kubelet[2633]: E1013 05:37:15.048073 2633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:37:15.052586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:37:15.052860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:37:15.053460 systemd[1]: kubelet.service: Consumed 189ms CPU time, 108.7M memory peak. 
Oct 13 05:37:16.345301 containerd[1965]: time="2025-10-13T05:37:16.345250975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:16.347503 containerd[1965]: time="2025-10-13T05:37:16.347453165Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Oct 13 05:37:16.350206 containerd[1965]: time="2025-10-13T05:37:16.349794805Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:16.353657 containerd[1965]: time="2025-10-13T05:37:16.353622824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:16.354465 containerd[1965]: time="2025-10-13T05:37:16.354433673Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.070279161s" Oct 13 05:37:16.354574 containerd[1965]: time="2025-10-13T05:37:16.354559674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 13 05:37:16.355081 containerd[1965]: time="2025-10-13T05:37:16.355049166Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 13 05:37:17.881113 containerd[1965]: time="2025-10-13T05:37:17.881045598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:17.882036 containerd[1965]: time="2025-10-13T05:37:17.881914017Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Oct 13 05:37:17.882862 containerd[1965]: time="2025-10-13T05:37:17.882827489Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:17.885405 containerd[1965]: time="2025-10-13T05:37:17.885374716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:17.886538 containerd[1965]: time="2025-10-13T05:37:17.886317345Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.531236681s" Oct 13 05:37:17.886538 containerd[1965]: time="2025-10-13T05:37:17.886347237Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 13 05:37:17.886797 
containerd[1965]: time="2025-10-13T05:37:17.886783344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 13 05:37:19.036356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707651101.mount: Deactivated successfully. Oct 13 05:37:19.666560 containerd[1965]: time="2025-10-13T05:37:19.666498900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:19.668652 containerd[1965]: time="2025-10-13T05:37:19.668599551Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Oct 13 05:37:19.671231 containerd[1965]: time="2025-10-13T05:37:19.671177899Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:19.677273 containerd[1965]: time="2025-10-13T05:37:19.677148388Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.790279788s" Oct 13 05:37:19.677273 containerd[1965]: time="2025-10-13T05:37:19.677188447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 13 05:37:19.677442 containerd[1965]: time="2025-10-13T05:37:19.677362248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:19.678105 containerd[1965]: time="2025-10-13T05:37:19.678077869Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 13 05:37:20.260067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607155165.mount: Deactivated successfully. 
Oct 13 05:37:21.377348 containerd[1965]: time="2025-10-13T05:37:21.377282726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:21.379340 containerd[1965]: time="2025-10-13T05:37:21.379295638Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Oct 13 05:37:21.381760 containerd[1965]: time="2025-10-13T05:37:21.381724073Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:21.385563 containerd[1965]: time="2025-10-13T05:37:21.385510167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:21.386607 containerd[1965]: time="2025-10-13T05:37:21.386280176Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.708172704s" Oct 13 05:37:21.386607 containerd[1965]: time="2025-10-13T05:37:21.386314478Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 13 05:37:21.386892 containerd[1965]: time="2025-10-13T05:37:21.386869109Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 05:37:21.869922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64323551.mount: Deactivated successfully. 
Oct 13 05:37:21.882701 containerd[1965]: time="2025-10-13T05:37:21.882625624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:37:21.884467 containerd[1965]: time="2025-10-13T05:37:21.884432214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 13 05:37:21.886770 containerd[1965]: time="2025-10-13T05:37:21.886719645Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:37:21.889677 containerd[1965]: time="2025-10-13T05:37:21.889626019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:37:21.890299 containerd[1965]: time="2025-10-13T05:37:21.890144199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 503.139447ms" Oct 13 05:37:21.890299 containerd[1965]: time="2025-10-13T05:37:21.890173403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 13 05:37:21.890800 containerd[1965]: time="2025-10-13T05:37:21.890648901Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 13 05:37:22.365684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21613910.mount: Deactivated successfully. 
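Each containerd "Pulled image" message above records the resolved digest, the unpacked size in bytes, and the wall-clock pull time (e.g. 2.326521875s for kube-apiserver and 503.139447ms for pause). Below is a rough Python sketch that extracts those fields from journal lines of this shape; the regex and the shortened sample lines are assumptions tuned to the escaped quoting seen here, not a general journald parser.

```python
#!/usr/bin/env python3
"""Sketch: summarize containerd "Pulled image" lines like the ones in this journal."""
import re

# Handles both plain and backslash-escaped quotes as they appear in the journal.
PULLED = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?'
    r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def summarize(lines):
    total_bytes, total_secs = 0, 0.0
    for line in lines:
        m = PULLED.search(line)
        if not m:
            continue
        secs = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
        total_bytes += int(m["size"])
        total_secs += secs
        print(f'{m["image"]}: {int(m["size"]):>10} bytes in {secs:.3f}s')
    print(f"total: {total_bytes} bytes in {total_secs:.3f}s")

if __name__ == "__main__":
    # Shortened sample lines modeled on the messages above.
    sample = [
        'msg="Pulled image \\"registry.k8s.io/kube-apiserver:v1.33.5\\" ... size \\"30111492\\" in 2.326521875s"',
        'msg="Pulled image \\"registry.k8s.io/pause:3.10\\" ... size \\"320368\\" in 503.139447ms"',
    ]
    summarize(sample)
```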
Oct 13 05:37:24.858837 containerd[1965]: time="2025-10-13T05:37:24.858766489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:24.860638 containerd[1965]: time="2025-10-13T05:37:24.860586312Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 13 05:37:24.863182 containerd[1965]: time="2025-10-13T05:37:24.863114144Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:24.867292 containerd[1965]: time="2025-10-13T05:37:24.867228923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:24.868438 containerd[1965]: time="2025-10-13T05:37:24.868212432Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.97753832s" Oct 13 05:37:24.868438 containerd[1965]: time="2025-10-13T05:37:24.868246183Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 13 05:37:25.243286 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 05:37:25.245694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:37:25.601602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:25.612803 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:37:25.685396 kubelet[2795]: E1013 05:37:25.685338 2795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:37:25.688705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:37:25.688896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:37:25.689543 systemd[1]: kubelet.service: Consumed 206ms CPU time, 110.5M memory peak. Oct 13 05:37:28.166974 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:28.167236 systemd[1]: kubelet.service: Consumed 206ms CPU time, 110.5M memory peak. Oct 13 05:37:28.170160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:37:28.207603 systemd[1]: Reload requested from client PID 2810 ('systemctl') (unit session-7.scope)... Oct 13 05:37:28.207625 systemd[1]: Reloading... Oct 13 05:37:28.370440 zram_generator::config[2859]: No configuration found. Oct 13 05:37:28.624755 systemd[1]: Reloading finished in 416 ms. Oct 13 05:37:28.682119 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:37:28.682210 systemd[1]: kubelet.service: Failed with result 'signal'. 
Oct 13 05:37:28.682800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:28.682854 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.3M memory peak. Oct 13 05:37:28.687232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:37:28.713391 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 13 05:37:28.941087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:28.952765 (kubelet)[2922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:37:29.014805 kubelet[2922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:37:29.014805 kubelet[2922]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:37:29.014805 kubelet[2922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:37:29.015171 kubelet[2922]: I1013 05:37:29.014863 2922 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:37:29.480353 kubelet[2922]: I1013 05:37:29.480305 2922 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:37:29.480353 kubelet[2922]: I1013 05:37:29.480345 2922 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:37:29.480946 kubelet[2922]: I1013 05:37:29.480923 2922 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:37:29.543144 kubelet[2922]: I1013 05:37:29.543098 2922 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:37:29.547150 kubelet[2922]: E1013 05:37:29.547036 2922 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:37:29.573074 kubelet[2922]: I1013 05:37:29.573035 2922 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:37:29.577833 kubelet[2922]: I1013 05:37:29.577773 2922 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:37:29.581764 kubelet[2922]: I1013 05:37:29.581704 2922 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:37:29.585523 kubelet[2922]: I1013 05:37:29.581754 2922 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:37:29.587202 kubelet[2922]: I1013 05:37:29.587171 2922 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:37:29.587202 kubelet[2922]: I1013 05:37:29.587204 2922 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:37:29.587348 kubelet[2922]: I1013 05:37:29.587332 2922 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:37:29.591542 kubelet[2922]: I1013 05:37:29.591429 2922 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:37:29.591542 kubelet[2922]: I1013 05:37:29.591486 2922 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:37:29.592453 kubelet[2922]: I1013 05:37:29.592428 2922 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:37:29.592453 kubelet[2922]: I1013 05:37:29.592456 2922 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:37:29.596316 kubelet[2922]: E1013 05:37:29.595652 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-130&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:37:29.601433 kubelet[2922]: E1013 05:37:29.601372 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Oct 13 05:37:29.602035 kubelet[2922]: I1013 05:37:29.602008 2922 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:37:29.602585 kubelet[2922]: I1013 05:37:29.602490 2922 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:37:29.605451 kubelet[2922]: W1013 05:37:29.603555 2922 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 05:37:29.606602 kubelet[2922]: I1013 05:37:29.606588 2922 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:37:29.606762 kubelet[2922]: I1013 05:37:29.606752 2922 server.go:1289] "Started kubelet" Oct 13 05:37:29.621341 kubelet[2922]: I1013 05:37:29.621227 2922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:37:29.622623 kubelet[2922]: E1013 05:37:29.617518 2922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.130:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-130.186df64f3fe77679 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-130,UID:ip-172-31-26-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-130,},FirstTimestamp:2025-10-13 05:37:29.606715001 +0000 UTC m=+0.649511844,LastTimestamp:2025-10-13 05:37:29.606715001 +0000 UTC m=+0.649511844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-130,}" Oct 13 05:37:29.628812 kubelet[2922]: I1013 05:37:29.624155 2922 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:37:29.628812 kubelet[2922]: I1013 05:37:29.625656 2922 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:37:29.631828 kubelet[2922]: I1013 05:37:29.630529 2922 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:37:29.631828 kubelet[2922]: I1013 05:37:29.630810 2922 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:37:29.631828 kubelet[2922]: I1013 05:37:29.631147 2922 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:37:29.633746 kubelet[2922]: I1013 05:37:29.633723 2922 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:37:29.634003 kubelet[2922]: E1013 05:37:29.633979 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:29.637351 kubelet[2922]: E1013 05:37:29.637308 2922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": dial tcp 172.31.26.130:6443: connect: connection refused" interval="200ms" Oct 13 05:37:29.637861 kubelet[2922]: I1013 05:37:29.637841 2922 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:37:29.638264 kubelet[2922]: I1013 05:37:29.638241 2922 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:37:29.638746 kubelet[2922]: E1013 05:37:29.638719 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:37:29.644806 kubelet[2922]: I1013 05:37:29.644762 2922 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:37:29.644946 kubelet[2922]: I1013 05:37:29.644871 2922 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:37:29.654501 kubelet[2922]: E1013 05:37:29.653925 2922 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:37:29.654501 kubelet[2922]: I1013 05:37:29.654152 2922 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:37:29.669171 kubelet[2922]: I1013 05:37:29.669117 2922 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:37:29.670877 kubelet[2922]: I1013 05:37:29.670839 2922 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 05:37:29.670877 kubelet[2922]: I1013 05:37:29.670876 2922 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:37:29.671029 kubelet[2922]: I1013 05:37:29.670903 2922 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:37:29.671029 kubelet[2922]: I1013 05:37:29.670913 2922 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:37:29.671029 kubelet[2922]: E1013 05:37:29.670960 2922 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:37:29.684715 kubelet[2922]: E1013 05:37:29.684677 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:37:29.688667 kubelet[2922]: I1013 05:37:29.688644 2922 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:37:29.688844 kubelet[2922]: I1013 05:37:29.688831 2922 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:37:29.688912 kubelet[2922]: I1013 05:37:29.688903 2922 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:37:29.694466 kubelet[2922]: I1013 05:37:29.694197 2922 policy_none.go:49] "None policy: Start" Oct 13 05:37:29.694466 kubelet[2922]: I1013 05:37:29.694226 2922 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:37:29.694466 kubelet[2922]: I1013 05:37:29.694243 2922 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:37:29.704662 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:37:29.714239 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 13 05:37:29.717950 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 05:37:29.727805 kubelet[2922]: E1013 05:37:29.727402 2922 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:37:29.727805 kubelet[2922]: I1013 05:37:29.727622 2922 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:37:29.727805 kubelet[2922]: I1013 05:37:29.727632 2922 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:37:29.730725 kubelet[2922]: I1013 05:37:29.729743 2922 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:37:29.732579 kubelet[2922]: E1013 05:37:29.732514 2922 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:37:29.732579 kubelet[2922]: E1013 05:37:29.732551 2922 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-130\" not found" Oct 13 05:37:29.787513 systemd[1]: Created slice kubepods-burstable-pod5ad29533d399393a0fdeb497275ea0e0.slice - libcontainer container kubepods-burstable-pod5ad29533d399393a0fdeb497275ea0e0.slice. Oct 13 05:37:29.796556 kubelet[2922]: E1013 05:37:29.796345 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:29.800692 systemd[1]: Created slice kubepods-burstable-pod42080cedb099368f218be61e94fea414.slice - libcontainer container kubepods-burstable-pod42080cedb099368f218be61e94fea414.slice. Oct 13 05:37:29.809018 kubelet[2922]: E1013 05:37:29.808987 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:29.812739 systemd[1]: Created slice kubepods-burstable-pod65d0e757ddee71cecc9bd6c746f26fd6.slice - libcontainer container kubepods-burstable-pod65d0e757ddee71cecc9bd6c746f26fd6.slice. 
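The kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice units above are the kubelet's QoS-tier cgroup hierarchy under the systemd driver, and each static pod then gets its own child slice named after its UID (e.g. kubepods-burstable-pod5ad29533d399393a0fdeb497275ea0e0.slice). The short Python sketch below just reproduces the naming convention as it appears in this journal; treat it as an illustration of the observed pattern, not a definitive rule for all pod UIDs.

```python
#!/usr/bin/env python3
"""Sketch: reproduce the per-pod slice names seen in the systemd messages above."""

def pod_slice(qos: str, pod_uid: str) -> str:
    # Pattern observed in this journal with the systemd cgroup driver:
    #   kubepods-<qos>-pod<uid>.slice, nested under kubepods-<qos>.slice
    #   (systemd infers the hierarchy from the dash-separated prefix).
    return f"kubepods-{qos}-pod{pod_uid}.slice"

if __name__ == "__main__":
    # UIDs taken from the "Created slice" messages above (static pod hashes).
    for uid in ("5ad29533d399393a0fdeb497275ea0e0",
                "42080cedb099368f218be61e94fea414",
                "65d0e757ddee71cecc9bd6c746f26fd6"):
        print(pod_slice("burstable", uid))
```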
Oct 13 05:37:29.814573 kubelet[2922]: E1013 05:37:29.814546 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:29.830306 kubelet[2922]: I1013 05:37:29.830053 2922 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Oct 13 05:37:29.830491 kubelet[2922]: E1013 05:37:29.830456 2922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.130:6443/api/v1/nodes\": dial tcp 172.31.26.130:6443: connect: connection refused" node="ip-172-31-26-130" Oct 13 05:37:29.839044 kubelet[2922]: E1013 05:37:29.839004 2922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": dial tcp 172.31.26.130:6443: connect: connection refused" interval="400ms" Oct 13 05:37:29.939694 kubelet[2922]: I1013 05:37:29.939637 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ad29533d399393a0fdeb497275ea0e0-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"5ad29533d399393a0fdeb497275ea0e0\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:29.939694 kubelet[2922]: I1013 05:37:29.939694 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:29.940085 kubelet[2922]: I1013 05:37:29.939719 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:29.940085 kubelet[2922]: I1013 05:37:29.939743 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:29.940085 kubelet[2922]: I1013 05:37:29.939767 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:29.940085 kubelet[2922]: I1013 05:37:29.939786 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ad29533d399393a0fdeb497275ea0e0-ca-certs\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"5ad29533d399393a0fdeb497275ea0e0\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:29.940085 kubelet[2922]: I1013 05:37:29.939887 2922 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ad29533d399393a0fdeb497275ea0e0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"5ad29533d399393a0fdeb497275ea0e0\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:29.940251 kubelet[2922]: I1013 05:37:29.939915 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:29.940251 kubelet[2922]: I1013 05:37:29.939939 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65d0e757ddee71cecc9bd6c746f26fd6-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-130\" (UID: \"65d0e757ddee71cecc9bd6c746f26fd6\") " pod="kube-system/kube-scheduler-ip-172-31-26-130" Oct 13 05:37:30.033798 kubelet[2922]: I1013 05:37:30.033763 2922 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Oct 13 05:37:30.034246 kubelet[2922]: E1013 05:37:30.034207 2922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.130:6443/api/v1/nodes\": dial tcp 172.31.26.130:6443: connect: connection refused" node="ip-172-31-26-130" Oct 13 05:37:30.098143 containerd[1965]: time="2025-10-13T05:37:30.098092655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-130,Uid:5ad29533d399393a0fdeb497275ea0e0,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:30.110644 containerd[1965]: time="2025-10-13T05:37:30.110312637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-130,Uid:42080cedb099368f218be61e94fea414,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:30.115666 containerd[1965]: time="2025-10-13T05:37:30.115619023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-130,Uid:65d0e757ddee71cecc9bd6c746f26fd6,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:30.240715 kubelet[2922]: E1013 05:37:30.240439 2922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": dial tcp 172.31.26.130:6443: connect: connection refused" interval="800ms" Oct 13 05:37:30.275463 containerd[1965]: time="2025-10-13T05:37:30.274802222Z" level=info msg="connecting to shim 8d742dfae72508116bd141daf3bdb843a3edecf838c84e69cbb55f65b1aefb44" address="unix:///run/containerd/s/04b24666bcb233dc5540a077b1245fe3dd5e1b08e3ebeb2b8bc3a1b9996fda1f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:30.275717 containerd[1965]: time="2025-10-13T05:37:30.275693619Z" level=info msg="connecting to shim bb8fa4e81c130acb56ae9ffe80ddaa3615e28f21e7357ce3c7f5bc6fdaaf19ce" address="unix:///run/containerd/s/f71fc9748ddec2bd3569c7fa7cb45351ff53d04e936cad74172b6b143ddbd2db" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:30.280305 containerd[1965]: time="2025-10-13T05:37:30.280246350Z" level=info msg="connecting to shim ebed4d85b5181561ca3239a63621add503e3e31fb0f71235843a6d4e549f8345" 
address="unix:///run/containerd/s/306bdfe04ac8998cb7d5642e2b2d277f67905fd6af1d6541b4551eec9e8594b3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:30.375622 systemd[1]: Started cri-containerd-8d742dfae72508116bd141daf3bdb843a3edecf838c84e69cbb55f65b1aefb44.scope - libcontainer container 8d742dfae72508116bd141daf3bdb843a3edecf838c84e69cbb55f65b1aefb44. Oct 13 05:37:30.376917 systemd[1]: Started cri-containerd-bb8fa4e81c130acb56ae9ffe80ddaa3615e28f21e7357ce3c7f5bc6fdaaf19ce.scope - libcontainer container bb8fa4e81c130acb56ae9ffe80ddaa3615e28f21e7357ce3c7f5bc6fdaaf19ce. Oct 13 05:37:30.377958 systemd[1]: Started cri-containerd-ebed4d85b5181561ca3239a63621add503e3e31fb0f71235843a6d4e549f8345.scope - libcontainer container ebed4d85b5181561ca3239a63621add503e3e31fb0f71235843a6d4e549f8345. Oct 13 05:37:30.444482 kubelet[2922]: I1013 05:37:30.444217 2922 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Oct 13 05:37:30.446386 kubelet[2922]: E1013 05:37:30.446187 2922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.130:6443/api/v1/nodes\": dial tcp 172.31.26.130:6443: connect: connection refused" node="ip-172-31-26-130" Oct 13 05:37:30.490285 containerd[1965]: time="2025-10-13T05:37:30.489047928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-130,Uid:42080cedb099368f218be61e94fea414,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d742dfae72508116bd141daf3bdb843a3edecf838c84e69cbb55f65b1aefb44\"" Oct 13 05:37:30.505377 containerd[1965]: time="2025-10-13T05:37:30.505334227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-130,Uid:5ad29533d399393a0fdeb497275ea0e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb8fa4e81c130acb56ae9ffe80ddaa3615e28f21e7357ce3c7f5bc6fdaaf19ce\"" Oct 13 05:37:30.508845 containerd[1965]: time="2025-10-13T05:37:30.506165046Z" level=info msg="CreateContainer within sandbox \"8d742dfae72508116bd141daf3bdb843a3edecf838c84e69cbb55f65b1aefb44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:37:30.521333 containerd[1965]: time="2025-10-13T05:37:30.521300747Z" level=info msg="CreateContainer within sandbox \"bb8fa4e81c130acb56ae9ffe80ddaa3615e28f21e7357ce3c7f5bc6fdaaf19ce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:37:30.526942 kubelet[2922]: E1013 05:37:30.526176 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:37:30.533453 containerd[1965]: time="2025-10-13T05:37:30.533394781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-130,Uid:65d0e757ddee71cecc9bd6c746f26fd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebed4d85b5181561ca3239a63621add503e3e31fb0f71235843a6d4e549f8345\"" Oct 13 05:37:30.540542 containerd[1965]: time="2025-10-13T05:37:30.540500586Z" level=info msg="CreateContainer within sandbox \"ebed4d85b5181561ca3239a63621add503e3e31fb0f71235843a6d4e549f8345\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:37:30.564281 containerd[1965]: time="2025-10-13T05:37:30.564120576Z" level=info msg="Container 
40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:30.564281 containerd[1965]: time="2025-10-13T05:37:30.564147046Z" level=info msg="Container 0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:30.564555 containerd[1965]: time="2025-10-13T05:37:30.564522187Z" level=info msg="Container ca6b0171b6ddda41644409eb3a5406ee98da870754aad1406ba85e58aead7be0: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:30.584675 containerd[1965]: time="2025-10-13T05:37:30.584607321Z" level=info msg="CreateContainer within sandbox \"8d742dfae72508116bd141daf3bdb843a3edecf838c84e69cbb55f65b1aefb44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52\"" Oct 13 05:37:30.585397 containerd[1965]: time="2025-10-13T05:37:30.585331258Z" level=info msg="StartContainer for \"40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52\"" Oct 13 05:37:30.586739 containerd[1965]: time="2025-10-13T05:37:30.586697199Z" level=info msg="connecting to shim 40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52" address="unix:///run/containerd/s/04b24666bcb233dc5540a077b1245fe3dd5e1b08e3ebeb2b8bc3a1b9996fda1f" protocol=ttrpc version=3 Oct 13 05:37:30.602004 containerd[1965]: time="2025-10-13T05:37:30.601900686Z" level=info msg="CreateContainer within sandbox \"bb8fa4e81c130acb56ae9ffe80ddaa3615e28f21e7357ce3c7f5bc6fdaaf19ce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca6b0171b6ddda41644409eb3a5406ee98da870754aad1406ba85e58aead7be0\"" Oct 13 05:37:30.605124 containerd[1965]: time="2025-10-13T05:37:30.604283545Z" level=info msg="StartContainer for \"ca6b0171b6ddda41644409eb3a5406ee98da870754aad1406ba85e58aead7be0\"" Oct 13 05:37:30.605124 containerd[1965]: time="2025-10-13T05:37:30.604960562Z" level=info msg="CreateContainer within sandbox \"ebed4d85b5181561ca3239a63621add503e3e31fb0f71235843a6d4e549f8345\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d\"" Oct 13 05:37:30.605617 containerd[1965]: time="2025-10-13T05:37:30.605565940Z" level=info msg="StartContainer for \"0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d\"" Oct 13 05:37:30.606246 containerd[1965]: time="2025-10-13T05:37:30.606154660Z" level=info msg="connecting to shim ca6b0171b6ddda41644409eb3a5406ee98da870754aad1406ba85e58aead7be0" address="unix:///run/containerd/s/f71fc9748ddec2bd3569c7fa7cb45351ff53d04e936cad74172b6b143ddbd2db" protocol=ttrpc version=3 Oct 13 05:37:30.606921 containerd[1965]: time="2025-10-13T05:37:30.606891107Z" level=info msg="connecting to shim 0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d" address="unix:///run/containerd/s/306bdfe04ac8998cb7d5642e2b2d277f67905fd6af1d6541b4551eec9e8594b3" protocol=ttrpc version=3 Oct 13 05:37:30.608818 systemd[1]: Started cri-containerd-40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52.scope - libcontainer container 40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52. Oct 13 05:37:30.643637 systemd[1]: Started cri-containerd-0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d.scope - libcontainer container 0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d. 
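Note that each container's "connecting to shim" message above reuses the ttrpc socket of its pod sandbox: the kube-controller-manager container 40db5311... points at the same /run/containerd/s/04b24666... address as sandbox 8d742dfa..., so sandboxes and their containers can be correlated purely from these lines. A rough Python sketch of that grouping; the regex and the shortened sample IDs/paths are assumptions matched to the message format shown here.

```python
#!/usr/bin/env python3
"""Sketch: group containerd "connecting to shim" messages by shim socket address."""
import re
from collections import defaultdict

SHIM = re.compile(
    r'connecting to shim (?P<id>[0-9a-f]+) address=\\?"(?P<addr>unix://[^"\\]+)\\?"'
)

def group_by_shim(lines):
    groups = defaultdict(list)
    for line in lines:
        m = SHIM.search(line)
        if m:
            groups[m["addr"]].append(m["id"])
    return groups

if __name__ == "__main__":
    # Shortened IDs and socket paths for illustration only.
    sample = [
        'msg="connecting to shim 8d742dfae725 address=\\"unix:///run/containerd/s/04b24666\\" ..."',
        'msg="connecting to shim 40db531112f4 address=\\"unix:///run/containerd/s/04b24666\\" ..."',
    ]
    for addr, ids in group_by_shim(sample).items():
        # IDs sharing an address belong to the same pod sandbox's shim.
        print(addr, "->", ids)
```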
Oct 13 05:37:30.654896 systemd[1]: Started cri-containerd-ca6b0171b6ddda41644409eb3a5406ee98da870754aad1406ba85e58aead7be0.scope - libcontainer container ca6b0171b6ddda41644409eb3a5406ee98da870754aad1406ba85e58aead7be0. Oct 13 05:37:30.726447 containerd[1965]: time="2025-10-13T05:37:30.726369484Z" level=info msg="StartContainer for \"40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52\" returns successfully" Oct 13 05:37:30.739023 containerd[1965]: time="2025-10-13T05:37:30.738985846Z" level=info msg="StartContainer for \"0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d\" returns successfully" Oct 13 05:37:30.744518 containerd[1965]: time="2025-10-13T05:37:30.744477127Z" level=info msg="StartContainer for \"ca6b0171b6ddda41644409eb3a5406ee98da870754aad1406ba85e58aead7be0\" returns successfully" Oct 13 05:37:30.778031 kubelet[2922]: E1013 05:37:30.777987 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:37:30.940099 kubelet[2922]: E1013 05:37:30.939464 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-130&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:37:31.041199 kubelet[2922]: E1013 05:37:31.041147 2922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": dial tcp 172.31.26.130:6443: connect: connection refused" interval="1.6s" Oct 13 05:37:31.196299 kubelet[2922]: E1013 05:37:31.196184 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:37:31.250437 kubelet[2922]: I1013 05:37:31.250025 2922 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Oct 13 05:37:31.250437 kubelet[2922]: E1013 05:37:31.250380 2922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.130:6443/api/v1/nodes\": dial tcp 172.31.26.130:6443: connect: connection refused" node="ip-172-31-26-130" Oct 13 05:37:31.671052 kubelet[2922]: E1013 05:37:31.671002 2922 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:37:31.718273 kubelet[2922]: E1013 05:37:31.718045 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:31.723922 kubelet[2922]: E1013 05:37:31.723560 2922 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:31.726593 kubelet[2922]: E1013 05:37:31.726564 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:32.598010 kubelet[2922]: E1013 05:37:32.597969 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:37:32.642153 kubelet[2922]: E1013 05:37:32.642103 2922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": dial tcp 172.31.26.130:6443: connect: connection refused" interval="3.2s" Oct 13 05:37:32.730578 kubelet[2922]: E1013 05:37:32.730544 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:32.731246 kubelet[2922]: E1013 05:37:32.731226 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:32.731656 kubelet[2922]: E1013 05:37:32.731639 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:32.852950 kubelet[2922]: I1013 05:37:32.852297 2922 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Oct 13 05:37:32.852950 kubelet[2922]: E1013 05:37:32.852639 2922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.130:6443/api/v1/nodes\": dial tcp 172.31.26.130:6443: connect: connection refused" node="ip-172-31-26-130" Oct 13 05:37:33.321591 kubelet[2922]: E1013 05:37:33.321548 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-130&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:37:33.564555 kubelet[2922]: E1013 05:37:33.564518 2922 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:37:33.730491 kubelet[2922]: E1013 05:37:33.730270 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:33.730491 kubelet[2922]: E1013 05:37:33.730344 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:33.788108 kubelet[2922]: E1013 05:37:33.788044 2922 reflector.go:200] 
"Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:37:35.271035 kubelet[2922]: E1013 05:37:35.271006 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:36.057170 kubelet[2922]: I1013 05:37:36.056992 2922 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Oct 13 05:37:36.624187 kubelet[2922]: E1013 05:37:36.624135 2922 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:36.689330 kubelet[2922]: E1013 05:37:36.689113 2922 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-130.186df64f3fe77679 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-130,UID:ip-172-31-26-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-130,},FirstTimestamp:2025-10-13 05:37:29.606715001 +0000 UTC m=+0.649511844,LastTimestamp:2025-10-13 05:37:29.606715001 +0000 UTC m=+0.649511844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-130,}" Oct 13 05:37:36.708966 kubelet[2922]: E1013 05:37:36.708912 2922 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Oct 13 05:37:36.755504 kubelet[2922]: I1013 05:37:36.755469 2922 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-130" Oct 13 05:37:36.755504 kubelet[2922]: E1013 05:37:36.755510 2922 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-130\": node \"ip-172-31-26-130\" not found" Oct 13 05:37:36.760312 kubelet[2922]: E1013 05:37:36.759253 2922 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-130.186df64f42b767ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-130,UID:ip-172-31-26-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-26-130,},FirstTimestamp:2025-10-13 05:37:29.653897133 +0000 UTC m=+0.696693969,LastTimestamp:2025-10-13 05:37:29.653897133 +0000 UTC m=+0.696693969,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-130,}" Oct 13 05:37:36.782495 kubelet[2922]: E1013 05:37:36.782439 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:36.882927 kubelet[2922]: E1013 05:37:36.882811 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:36.983446 
kubelet[2922]: E1013 05:37:36.983389 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:37.083986 kubelet[2922]: E1013 05:37:37.083929 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:37.184558 kubelet[2922]: E1013 05:37:37.184435 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:37.284865 kubelet[2922]: E1013 05:37:37.284816 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:37.385775 kubelet[2922]: E1013 05:37:37.385726 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:37.486984 kubelet[2922]: E1013 05:37:37.486671 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:37.586857 kubelet[2922]: E1013 05:37:37.586813 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:37.736882 kubelet[2922]: I1013 05:37:37.736842 2922 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:37.752382 kubelet[2922]: I1013 05:37:37.752046 2922 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:37.760423 kubelet[2922]: I1013 05:37:37.760375 2922 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-130" Oct 13 05:37:38.149516 kubelet[2922]: I1013 05:37:38.149388 2922 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:38.156620 kubelet[2922]: E1013 05:37:38.156581 2922 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-130\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:38.600571 kubelet[2922]: I1013 05:37:38.600520 2922 apiserver.go:52] "Watching apiserver" Oct 13 05:37:38.638715 kubelet[2922]: I1013 05:37:38.638667 2922 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:37:38.937543 systemd[1]: Reload requested from client PID 3201 ('systemctl') (unit session-7.scope)... Oct 13 05:37:38.937562 systemd[1]: Reloading... Oct 13 05:37:39.018487 zram_generator::config[3243]: No configuration found. Oct 13 05:37:39.317978 systemd[1]: Reloading finished in 380 ms. Oct 13 05:37:39.347007 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:37:39.367713 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:37:39.368180 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:39.368259 systemd[1]: kubelet.service: Consumed 1.088s CPU time, 128.6M memory peak. Oct 13 05:37:39.370584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:37:39.620872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
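The long run of "dial tcp 172.31.26.130:6443: connect: connection refused" errors above is the expected bootstrap race: the kubelet keeps retrying the API server until its own static kube-apiserver pod is serving, at which point node registration succeeds (05:37:36). Below is a minimal Python sketch of that wait-until-listening pattern; the address is the one from this log, and the fixed-interval polling loop is an illustrative assumption, not the kubelet's actual retry/backoff logic.

```python
#!/usr/bin/env python3
"""Sketch: poll a TCP endpoint until it accepts connections (e.g. the API server)."""
import socket
import time

def wait_for_endpoint(host: str, port: int,
                      timeout_s: float = 120.0, interval_s: float = 2.0) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True          # something is listening (no TLS handshake attempted)
        except OSError:
            time.sleep(interval_s)   # connection refused / timed out: retry
    return False

if __name__ == "__main__":
    # Address taken from the kubelet errors in this journal.
    print("apiserver reachable:", wait_for_endpoint("172.31.26.130", 6443))
```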
Oct 13 05:37:39.632946 (kubelet)[3306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:37:39.710135 kubelet[3306]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:37:39.710135 kubelet[3306]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:37:39.710135 kubelet[3306]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:37:39.710634 kubelet[3306]: I1013 05:37:39.710190 3306 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:37:39.718039 kubelet[3306]: I1013 05:37:39.718007 3306 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:37:39.719455 kubelet[3306]: I1013 05:37:39.718165 3306 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:37:39.719455 kubelet[3306]: I1013 05:37:39.718379 3306 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:37:39.719661 kubelet[3306]: I1013 05:37:39.719648 3306 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:37:39.731557 kubelet[3306]: I1013 05:37:39.731529 3306 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:37:39.739078 kubelet[3306]: I1013 05:37:39.739056 3306 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:37:39.744012 kubelet[3306]: I1013 05:37:39.743986 3306 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:37:39.744380 kubelet[3306]: I1013 05:37:39.744350 3306 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:37:39.744656 kubelet[3306]: I1013 05:37:39.744481 3306 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:37:39.744745 kubelet[3306]: I1013 05:37:39.744669 3306 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:37:39.744745 kubelet[3306]: I1013 05:37:39.744680 3306 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:37:39.744745 kubelet[3306]: I1013 05:37:39.744728 3306 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:37:39.744901 kubelet[3306]: I1013 05:37:39.744868 3306 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:37:39.744901 kubelet[3306]: I1013 05:37:39.744886 3306 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:37:39.744901 kubelet[3306]: I1013 05:37:39.744907 3306 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:37:39.745051 kubelet[3306]: I1013 05:37:39.744920 3306 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:37:39.745852 kubelet[3306]: I1013 05:37:39.745740 3306 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:37:39.747137 kubelet[3306]: I1013 05:37:39.746499 3306 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:37:39.748911 kubelet[3306]: I1013 05:37:39.748894 3306 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:37:39.749060 kubelet[3306]: I1013 05:37:39.749050 3306 server.go:1289] "Started kubelet" Oct 13 05:37:39.750800 kubelet[3306]: I1013 05:37:39.750783 3306 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:37:39.758999 kubelet[3306]: I1013 
05:37:39.758959 3306 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:37:39.759974 kubelet[3306]: I1013 05:37:39.759954 3306 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:37:39.766914 kubelet[3306]: I1013 05:37:39.766859 3306 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:37:39.767237 kubelet[3306]: E1013 05:37:39.767203 3306 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Oct 13 05:37:39.770451 kubelet[3306]: I1013 05:37:39.769574 3306 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:37:39.770451 kubelet[3306]: I1013 05:37:39.769770 3306 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:37:39.770451 kubelet[3306]: I1013 05:37:39.769821 3306 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:37:39.770451 kubelet[3306]: I1013 05:37:39.769884 3306 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:37:39.770735 kubelet[3306]: I1013 05:37:39.770716 3306 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:37:39.774671 kubelet[3306]: I1013 05:37:39.774498 3306 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:37:39.775537 kubelet[3306]: E1013 05:37:39.775380 3306 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:37:39.777843 kubelet[3306]: I1013 05:37:39.777815 3306 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 05:37:39.777947 kubelet[3306]: I1013 05:37:39.777853 3306 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:37:39.777947 kubelet[3306]: I1013 05:37:39.777876 3306 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
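The entries at 05:37:39.719 and 05:37:39.770 show this kubelet loading its rotated client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem and starting the serving-cert controller for kubelet.crt and kubelet.key. A small, purely illustrative sketch for checking how much lifetime those rotated files have left, assuming the third-party cryptography package and read access to the paths shown in the log:

# cert_expiry.py - report the remaining lifetime of the kubelet's cert files.
# Assumptions: `pip install cryptography`, read access to /var/lib/kubelet/pki,
# and the file paths taken from the kubelet log above.
from datetime import datetime
from pathlib import Path
from cryptography import x509

CERT_FILES = [
    "/var/lib/kubelet/pki/kubelet-client-current.pem",  # rotated client cert (+ key)
    "/var/lib/kubelet/pki/kubelet.crt",                  # serving cert
]

def remaining_days(path: str) -> float:
    text = Path(path).read_text()
    # kubelet-client-current.pem also holds the private key; keep only the cert block.
    begin = text.index("-----BEGIN CERTIFICATE-----")
    end = text.index("-----END CERTIFICATE-----") + len("-----END CERTIFICATE-----")
    cert = x509.load_pem_x509_certificate(text[begin:end].encode())
    # not_valid_after is a naive UTC datetime; newer cryptography also offers not_valid_after_utc.
    return (cert.not_valid_after - datetime.utcnow()).total_seconds() / 86400

if __name__ == "__main__":
    for f in CERT_FILES:
        print(f"{f}: {remaining_days(f):.1f} days until expiry")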
Oct 13 05:37:39.777947 kubelet[3306]: I1013 05:37:39.777884 3306 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:37:39.777947 kubelet[3306]: E1013 05:37:39.777930 3306 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:37:39.788439 kubelet[3306]: I1013 05:37:39.788260 3306 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:37:39.788439 kubelet[3306]: I1013 05:37:39.788362 3306 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:37:39.794440 kubelet[3306]: I1013 05:37:39.794243 3306 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:37:39.857026 kubelet[3306]: I1013 05:37:39.856993 3306 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:37:39.857188 kubelet[3306]: I1013 05:37:39.857174 3306 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:37:39.857249 kubelet[3306]: I1013 05:37:39.857244 3306 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:37:39.857442 kubelet[3306]: I1013 05:37:39.857408 3306 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:37:39.858322 kubelet[3306]: I1013 05:37:39.857499 3306 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:37:39.858322 kubelet[3306]: I1013 05:37:39.857519 3306 policy_none.go:49] "None policy: Start" Oct 13 05:37:39.858322 kubelet[3306]: I1013 05:37:39.857529 3306 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:37:39.858322 kubelet[3306]: I1013 05:37:39.857538 3306 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:37:39.858322 kubelet[3306]: I1013 05:37:39.857646 3306 state_mem.go:75] "Updated machine memory state" Oct 13 05:37:39.862431 kubelet[3306]: E1013 05:37:39.862395 3306 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:37:39.862772 kubelet[3306]: I1013 05:37:39.862733 3306 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:37:39.862978 kubelet[3306]: I1013 05:37:39.862936 3306 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:37:39.863959 kubelet[3306]: I1013 05:37:39.863945 3306 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:37:39.867023 kubelet[3306]: E1013 05:37:39.866989 3306 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 05:37:39.880295 kubelet[3306]: I1013 05:37:39.879329 3306 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:39.882699 kubelet[3306]: I1013 05:37:39.880695 3306 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-130" Oct 13 05:37:39.882699 kubelet[3306]: I1013 05:37:39.881006 3306 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:39.902390 kubelet[3306]: E1013 05:37:39.902339 3306 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-130\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:39.908085 kubelet[3306]: E1013 05:37:39.906730 3306 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-130\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:39.908085 kubelet[3306]: E1013 05:37:39.906832 3306 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-130\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-130" Oct 13 05:37:39.966147 kubelet[3306]: I1013 05:37:39.966118 3306 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Oct 13 05:37:39.971305 kubelet[3306]: I1013 05:37:39.971268 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ad29533d399393a0fdeb497275ea0e0-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"5ad29533d399393a0fdeb497275ea0e0\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:39.971305 kubelet[3306]: I1013 05:37:39.971304 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ad29533d399393a0fdeb497275ea0e0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"5ad29533d399393a0fdeb497275ea0e0\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:39.971627 kubelet[3306]: I1013 05:37:39.971335 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:39.971627 kubelet[3306]: I1013 05:37:39.971352 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:39.971627 kubelet[3306]: I1013 05:37:39.971369 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:39.971627 kubelet[3306]: 
I1013 05:37:39.971384 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65d0e757ddee71cecc9bd6c746f26fd6-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-130\" (UID: \"65d0e757ddee71cecc9bd6c746f26fd6\") " pod="kube-system/kube-scheduler-ip-172-31-26-130" Oct 13 05:37:39.971627 kubelet[3306]: I1013 05:37:39.971400 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ad29533d399393a0fdeb497275ea0e0-ca-certs\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"5ad29533d399393a0fdeb497275ea0e0\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:39.972017 kubelet[3306]: I1013 05:37:39.971493 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:39.972017 kubelet[3306]: I1013 05:37:39.971975 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42080cedb099368f218be61e94fea414-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"42080cedb099368f218be61e94fea414\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Oct 13 05:37:39.978855 kubelet[3306]: I1013 05:37:39.978825 3306 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-130" Oct 13 05:37:39.979020 kubelet[3306]: I1013 05:37:39.978900 3306 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-130" Oct 13 05:37:40.753637 kubelet[3306]: I1013 05:37:40.753581 3306 apiserver.go:52] "Watching apiserver" Oct 13 05:37:40.770356 kubelet[3306]: I1013 05:37:40.770309 3306 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:37:40.842715 kubelet[3306]: I1013 05:37:40.842672 3306 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-130" Oct 13 05:37:40.843956 kubelet[3306]: I1013 05:37:40.843724 3306 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:40.856947 kubelet[3306]: E1013 05:37:40.856912 3306 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-130\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-130" Oct 13 05:37:40.859366 kubelet[3306]: E1013 05:37:40.859317 3306 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-130\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-130" Oct 13 05:37:40.879088 kubelet[3306]: I1013 05:37:40.878895 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-130" podStartSLOduration=3.87887803 podStartE2EDuration="3.87887803s" podCreationTimestamp="2025-10-13 05:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:37:40.869812638 +0000 UTC m=+1.229237410" watchObservedRunningTime="2025-10-13 05:37:40.87887803 +0000 UTC m=+1.238302785" Oct 13 05:37:40.879447 kubelet[3306]: I1013 05:37:40.879405 3306 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-130" podStartSLOduration=3.879394278 podStartE2EDuration="3.879394278s" podCreationTimestamp="2025-10-13 05:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:37:40.879368753 +0000 UTC m=+1.238793513" watchObservedRunningTime="2025-10-13 05:37:40.879394278 +0000 UTC m=+1.238819029" Oct 13 05:37:40.895187 kubelet[3306]: I1013 05:37:40.894463 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-130" podStartSLOduration=3.894443202 podStartE2EDuration="3.894443202s" podCreationTimestamp="2025-10-13 05:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:37:40.894370989 +0000 UTC m=+1.253795762" watchObservedRunningTime="2025-10-13 05:37:40.894443202 +0000 UTC m=+1.253867978" Oct 13 05:37:43.559996 update_engine[1949]: I20251013 05:37:43.559688 1949 update_attempter.cc:509] Updating boot flags... Oct 13 05:37:44.554163 kubelet[3306]: I1013 05:37:44.554120 3306 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:37:44.554907 containerd[1965]: time="2025-10-13T05:37:44.554870184Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:37:44.555480 kubelet[3306]: I1013 05:37:44.555172 3306 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:37:45.343563 systemd[1]: Created slice kubepods-besteffort-pod2f7ea1df_520c_4106_afb4_80050dffdfb3.slice - libcontainer container kubepods-besteffort-pod2f7ea1df_520c_4106_afb4_80050dffdfb3.slice. 
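At 05:37:44 the kubelet pushes the node's pod CIDR to the runtime ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24") and containerd notes that it is still waiting for a CNI config to be dropped in. A stdlib-only sketch of what that /24 means for per-node capacity; the max-pods figure of 110 is the upstream kubelet default, assumed here rather than read from this log:

# pod_cidr_capacity.py - relate the node's pod CIDR to schedulable pod count.
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")   # from the kubelet log above
usable_ips = pod_cidr.num_addresses - 2             # drop network/broadcast addresses
kubelet_max_pods = 110                              # upstream default, assumed

print(f"{pod_cidr} provides {usable_ips} usable pod IPs")
print(f"effective pod capacity: {min(usable_ips, kubelet_max_pods)}")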
Oct 13 05:37:45.412159 kubelet[3306]: I1013 05:37:45.412113 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f7ea1df-520c-4106-afb4-80050dffdfb3-kube-proxy\") pod \"kube-proxy-2cmvn\" (UID: \"2f7ea1df-520c-4106-afb4-80050dffdfb3\") " pod="kube-system/kube-proxy-2cmvn" Oct 13 05:37:45.412159 kubelet[3306]: I1013 05:37:45.412155 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f7ea1df-520c-4106-afb4-80050dffdfb3-xtables-lock\") pod \"kube-proxy-2cmvn\" (UID: \"2f7ea1df-520c-4106-afb4-80050dffdfb3\") " pod="kube-system/kube-proxy-2cmvn" Oct 13 05:37:45.412159 kubelet[3306]: I1013 05:37:45.412174 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f7ea1df-520c-4106-afb4-80050dffdfb3-lib-modules\") pod \"kube-proxy-2cmvn\" (UID: \"2f7ea1df-520c-4106-afb4-80050dffdfb3\") " pod="kube-system/kube-proxy-2cmvn" Oct 13 05:37:45.412406 kubelet[3306]: I1013 05:37:45.412192 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svsfr\" (UniqueName: \"kubernetes.io/projected/2f7ea1df-520c-4106-afb4-80050dffdfb3-kube-api-access-svsfr\") pod \"kube-proxy-2cmvn\" (UID: \"2f7ea1df-520c-4106-afb4-80050dffdfb3\") " pod="kube-system/kube-proxy-2cmvn" Oct 13 05:37:45.655446 containerd[1965]: time="2025-10-13T05:37:45.655029278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2cmvn,Uid:2f7ea1df-520c-4106-afb4-80050dffdfb3,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:45.698558 containerd[1965]: time="2025-10-13T05:37:45.698457112Z" level=info msg="connecting to shim d952e6f6c81b17d466bf86cf571547003c567555fef73c8228905d5bcfd0265e" address="unix:///run/containerd/s/2bbe97a234d50bce56b383126918112f4a49a175be0bef77e525fcf541b66211" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:45.751640 systemd[1]: Started cri-containerd-d952e6f6c81b17d466bf86cf571547003c567555fef73c8228905d5bcfd0265e.scope - libcontainer container d952e6f6c81b17d466bf86cf571547003c567555fef73c8228905d5bcfd0265e. Oct 13 05:37:45.827334 containerd[1965]: time="2025-10-13T05:37:45.827295229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2cmvn,Uid:2f7ea1df-520c-4106-afb4-80050dffdfb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d952e6f6c81b17d466bf86cf571547003c567555fef73c8228905d5bcfd0265e\"" Oct 13 05:37:45.836077 containerd[1965]: time="2025-10-13T05:37:45.836035103Z" level=info msg="CreateContainer within sandbox \"d952e6f6c81b17d466bf86cf571547003c567555fef73c8228905d5bcfd0265e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:37:45.878002 containerd[1965]: time="2025-10-13T05:37:45.877883617Z" level=info msg="Container b44db21749a4ad077c92582544eed5ae74d1582b8a6c5fda40a34cdd6c2228cb: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:45.879884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649779123.mount: Deactivated successfully. Oct 13 05:37:45.897050 systemd[1]: Created slice kubepods-besteffort-podfb77dfca_99c9_4b08_ab84_57e2bbe759aa.slice - libcontainer container kubepods-besteffort-podfb77dfca_99c9_4b08_ab84_57e2bbe759aa.slice. 
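The reconciler entries above enumerate every volume the kube-proxy-2cmvn pod needs (the kube-proxy ConfigMap, the xtables-lock and lib-modules host paths, and a projected service-account token) before its sandbox is created. A hedged helper for reading such journal output at a glance, grouping "VerifyControllerAttachedVolume" messages by pod; the regex is written against the message format visible in this log and may need adjusting for other kubelet versions:

# group_volumes.py - summarize kubelet volume-attach log lines by pod.
# Usage: journalctl -u kubelet | python3 group_volumes.py
import re
import sys
from collections import defaultdict

# Matches the message format seen above; treat the exact shape as an assumption.
PATTERN = re.compile(
    r'started for volume \\?"(?P<volume>[^"\\]+)\\?".*?pod="(?P<pod>[^"]+)"'
)

def summarize(lines):
    by_pod = defaultdict(list)
    for line in lines:
        for m in PATTERN.finditer(line):
            by_pod[m.group("pod")].append(m.group("volume"))
    return by_pod

if __name__ == "__main__":
    for pod, volumes in summarize(sys.stdin).items():
        print(f"{pod}: {', '.join(sorted(volumes))}")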
Oct 13 05:37:45.903350 containerd[1965]: time="2025-10-13T05:37:45.903305246Z" level=info msg="CreateContainer within sandbox \"d952e6f6c81b17d466bf86cf571547003c567555fef73c8228905d5bcfd0265e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b44db21749a4ad077c92582544eed5ae74d1582b8a6c5fda40a34cdd6c2228cb\"" Oct 13 05:37:45.905125 containerd[1965]: time="2025-10-13T05:37:45.905071144Z" level=info msg="StartContainer for \"b44db21749a4ad077c92582544eed5ae74d1582b8a6c5fda40a34cdd6c2228cb\"" Oct 13 05:37:45.909312 containerd[1965]: time="2025-10-13T05:37:45.909214661Z" level=info msg="connecting to shim b44db21749a4ad077c92582544eed5ae74d1582b8a6c5fda40a34cdd6c2228cb" address="unix:///run/containerd/s/2bbe97a234d50bce56b383126918112f4a49a175be0bef77e525fcf541b66211" protocol=ttrpc version=3 Oct 13 05:37:45.916629 kubelet[3306]: I1013 05:37:45.916541 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzf5\" (UniqueName: \"kubernetes.io/projected/fb77dfca-99c9-4b08-ab84-57e2bbe759aa-kube-api-access-mwzf5\") pod \"tigera-operator-755d956888-5ngfz\" (UID: \"fb77dfca-99c9-4b08-ab84-57e2bbe759aa\") " pod="tigera-operator/tigera-operator-755d956888-5ngfz" Oct 13 05:37:45.916629 kubelet[3306]: I1013 05:37:45.916611 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb77dfca-99c9-4b08-ab84-57e2bbe759aa-var-lib-calico\") pod \"tigera-operator-755d956888-5ngfz\" (UID: \"fb77dfca-99c9-4b08-ab84-57e2bbe759aa\") " pod="tigera-operator/tigera-operator-755d956888-5ngfz" Oct 13 05:37:45.930657 systemd[1]: Started cri-containerd-b44db21749a4ad077c92582544eed5ae74d1582b8a6c5fda40a34cdd6c2228cb.scope - libcontainer container b44db21749a4ad077c92582544eed5ae74d1582b8a6c5fda40a34cdd6c2228cb. Oct 13 05:37:45.981124 containerd[1965]: time="2025-10-13T05:37:45.981086825Z" level=info msg="StartContainer for \"b44db21749a4ad077c92582544eed5ae74d1582b8a6c5fda40a34cdd6c2228cb\" returns successfully" Oct 13 05:37:46.203653 containerd[1965]: time="2025-10-13T05:37:46.203532898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-5ngfz,Uid:fb77dfca-99c9-4b08-ab84-57e2bbe759aa,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:37:46.233879 containerd[1965]: time="2025-10-13T05:37:46.233831935Z" level=info msg="connecting to shim 38f0a19176ba0d0aed6519617dc7db6c6903e13894baebdba40f8c032fa2cfde" address="unix:///run/containerd/s/d4db5f5ad493c952b8bfc00974a863eb5f8255009fb91ce5c84d9176951516ac" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:46.261642 systemd[1]: Started cri-containerd-38f0a19176ba0d0aed6519617dc7db6c6903e13894baebdba40f8c032fa2cfde.scope - libcontainer container 38f0a19176ba0d0aed6519617dc7db6c6903e13894baebdba40f8c032fa2cfde. Oct 13 05:37:46.313876 containerd[1965]: time="2025-10-13T05:37:46.313834225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-5ngfz,Uid:fb77dfca-99c9-4b08-ab84-57e2bbe759aa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"38f0a19176ba0d0aed6519617dc7db6c6903e13894baebdba40f8c032fa2cfde\"" Oct 13 05:37:46.316772 containerd[1965]: time="2025-10-13T05:37:46.315477778Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:37:46.532065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390604961.mount: Deactivated successfully. 
Oct 13 05:37:47.640238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966696020.mount: Deactivated successfully. Oct 13 05:37:48.318871 kubelet[3306]: I1013 05:37:48.318800 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2cmvn" podStartSLOduration=3.318778074 podStartE2EDuration="3.318778074s" podCreationTimestamp="2025-10-13 05:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:37:46.894961645 +0000 UTC m=+7.254386425" watchObservedRunningTime="2025-10-13 05:37:48.318778074 +0000 UTC m=+8.678202849" Oct 13 05:37:48.985124 containerd[1965]: time="2025-10-13T05:37:48.985068369Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:48.986937 containerd[1965]: time="2025-10-13T05:37:48.986901014Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:37:48.989706 containerd[1965]: time="2025-10-13T05:37:48.989620727Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:48.992864 containerd[1965]: time="2025-10-13T05:37:48.992826728Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:48.993696 containerd[1965]: time="2025-10-13T05:37:48.993664764Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.678148071s" Oct 13 05:37:48.993822 containerd[1965]: time="2025-10-13T05:37:48.993803799Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:37:49.005759 containerd[1965]: time="2025-10-13T05:37:49.005715632Z" level=info msg="CreateContainer within sandbox \"38f0a19176ba0d0aed6519617dc7db6c6903e13894baebdba40f8c032fa2cfde\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:37:49.022939 containerd[1965]: time="2025-10-13T05:37:49.020647194Z" level=info msg="Container b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:49.031566 containerd[1965]: time="2025-10-13T05:37:49.031511382Z" level=info msg="CreateContainer within sandbox \"38f0a19176ba0d0aed6519617dc7db6c6903e13894baebdba40f8c032fa2cfde\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038\"" Oct 13 05:37:49.033276 containerd[1965]: time="2025-10-13T05:37:49.032883883Z" level=info msg="StartContainer for \"b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038\"" Oct 13 05:37:49.033986 containerd[1965]: time="2025-10-13T05:37:49.033943955Z" level=info msg="connecting to shim b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038" 
address="unix:///run/containerd/s/d4db5f5ad493c952b8bfc00974a863eb5f8255009fb91ce5c84d9176951516ac" protocol=ttrpc version=3 Oct 13 05:37:49.060656 systemd[1]: Started cri-containerd-b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038.scope - libcontainer container b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038. Oct 13 05:37:49.150232 containerd[1965]: time="2025-10-13T05:37:49.149798518Z" level=info msg="StartContainer for \"b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038\" returns successfully" Oct 13 05:38:25.478081 sudo[2336]: pam_unix(sudo:session): session closed for user root Oct 13 05:38:25.500432 sshd[2335]: Connection closed by 139.178.89.65 port 40836 Oct 13 05:38:25.501592 sshd-session[2332]: pam_unix(sshd:session): session closed for user core Oct 13 05:38:25.509747 systemd-logind[1948]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:38:25.512336 systemd[1]: sshd@6-172.31.26.130:22-139.178.89.65:40836.service: Deactivated successfully. Oct 13 05:38:25.519751 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:38:25.520273 systemd[1]: session-7.scope: Consumed 5.742s CPU time, 153M memory peak. Oct 13 05:38:25.528534 systemd-logind[1948]: Removed session 7. Oct 13 05:38:30.300635 kubelet[3306]: I1013 05:38:30.300570 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-5ngfz" podStartSLOduration=42.620765273 podStartE2EDuration="45.30055517s" podCreationTimestamp="2025-10-13 05:37:45 +0000 UTC" firstStartedPulling="2025-10-13 05:37:46.31510352 +0000 UTC m=+6.674528271" lastFinishedPulling="2025-10-13 05:37:48.994893416 +0000 UTC m=+9.354318168" observedRunningTime="2025-10-13 05:37:49.938737466 +0000 UTC m=+10.298162264" watchObservedRunningTime="2025-10-13 05:38:30.30055517 +0000 UTC m=+50.659979982" Oct 13 05:38:30.312953 systemd[1]: Created slice kubepods-besteffort-pod7eef36be_21ee_4ef1_bb60_df1e9c6d8c1a.slice - libcontainer container kubepods-besteffort-pod7eef36be_21ee_4ef1_bb60_df1e9c6d8c1a.slice. Oct 13 05:38:30.352040 kubelet[3306]: I1013 05:38:30.351999 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjkj\" (UniqueName: \"kubernetes.io/projected/7eef36be-21ee-4ef1-bb60-df1e9c6d8c1a-kube-api-access-qsjkj\") pod \"calico-typha-7578cc457c-fn9c7\" (UID: \"7eef36be-21ee-4ef1-bb60-df1e9c6d8c1a\") " pod="calico-system/calico-typha-7578cc457c-fn9c7" Oct 13 05:38:30.352040 kubelet[3306]: I1013 05:38:30.352042 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7eef36be-21ee-4ef1-bb60-df1e9c6d8c1a-typha-certs\") pod \"calico-typha-7578cc457c-fn9c7\" (UID: \"7eef36be-21ee-4ef1-bb60-df1e9c6d8c1a\") " pod="calico-system/calico-typha-7578cc457c-fn9c7" Oct 13 05:38:30.352204 kubelet[3306]: I1013 05:38:30.352064 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7eef36be-21ee-4ef1-bb60-df1e9c6d8c1a-tigera-ca-bundle\") pod \"calico-typha-7578cc457c-fn9c7\" (UID: \"7eef36be-21ee-4ef1-bb60-df1e9c6d8c1a\") " pod="calico-system/calico-typha-7578cc457c-fn9c7" Oct 13 05:38:30.611081 systemd[1]: Created slice kubepods-besteffort-podb60ff97b_191f_4823_889c_af2d81bc5a36.slice - libcontainer container kubepods-besteffort-podb60ff97b_191f_4823_889c_af2d81bc5a36.slice. 
Oct 13 05:38:30.618341 containerd[1965]: time="2025-10-13T05:38:30.618289888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7578cc457c-fn9c7,Uid:7eef36be-21ee-4ef1-bb60-df1e9c6d8c1a,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:30.657443 kubelet[3306]: I1013 05:38:30.653806 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b60ff97b-191f-4823-889c-af2d81bc5a36-policysync\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.657443 kubelet[3306]: I1013 05:38:30.653858 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrhnz\" (UniqueName: \"kubernetes.io/projected/b60ff97b-191f-4823-889c-af2d81bc5a36-kube-api-access-zrhnz\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.657443 kubelet[3306]: I1013 05:38:30.653891 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b60ff97b-191f-4823-889c-af2d81bc5a36-cni-log-dir\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.657443 kubelet[3306]: I1013 05:38:30.653913 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b60ff97b-191f-4823-889c-af2d81bc5a36-var-lib-calico\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.657443 kubelet[3306]: I1013 05:38:30.653934 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b60ff97b-191f-4823-889c-af2d81bc5a36-var-run-calico\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.657759 kubelet[3306]: I1013 05:38:30.653958 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b60ff97b-191f-4823-889c-af2d81bc5a36-cni-bin-dir\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.657759 kubelet[3306]: I1013 05:38:30.653978 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b60ff97b-191f-4823-889c-af2d81bc5a36-cni-net-dir\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.657759 kubelet[3306]: I1013 05:38:30.654007 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b60ff97b-191f-4823-889c-af2d81bc5a36-tigera-ca-bundle\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.657759 kubelet[3306]: I1013 05:38:30.654031 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/b60ff97b-191f-4823-889c-af2d81bc5a36-flexvol-driver-host\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.657759 kubelet[3306]: I1013 05:38:30.654056 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b60ff97b-191f-4823-889c-af2d81bc5a36-node-certs\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.658395 kubelet[3306]: I1013 05:38:30.654081 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b60ff97b-191f-4823-889c-af2d81bc5a36-lib-modules\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.658395 kubelet[3306]: I1013 05:38:30.654105 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b60ff97b-191f-4823-889c-af2d81bc5a36-xtables-lock\") pod \"calico-node-wpggc\" (UID: \"b60ff97b-191f-4823-889c-af2d81bc5a36\") " pod="calico-system/calico-node-wpggc" Oct 13 05:38:30.693817 containerd[1965]: time="2025-10-13T05:38:30.693757308Z" level=info msg="connecting to shim 52686f0a93b96eb8c6a8790b0fc569a7134e0572172ac803c58fcbfb8ca5cce1" address="unix:///run/containerd/s/ff22beb4429b39a11181ddd75534ae892a44a1f6efe203fd22b5d93a7f48ea7e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:30.750745 systemd[1]: Started cri-containerd-52686f0a93b96eb8c6a8790b0fc569a7134e0572172ac803c58fcbfb8ca5cce1.scope - libcontainer container 52686f0a93b96eb8c6a8790b0fc569a7134e0572172ac803c58fcbfb8ca5cce1. Oct 13 05:38:30.860625 containerd[1965]: time="2025-10-13T05:38:30.860579861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7578cc457c-fn9c7,Uid:7eef36be-21ee-4ef1-bb60-df1e9c6d8c1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"52686f0a93b96eb8c6a8790b0fc569a7134e0572172ac803c58fcbfb8ca5cce1\"" Oct 13 05:38:30.865033 containerd[1965]: time="2025-10-13T05:38:30.863005593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:38:30.915665 kubelet[3306]: E1013 05:38:30.915528 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx7bk" podUID="96c04204-34c8-461e-8a7a-c15984adac1a" Oct 13 05:38:30.919739 kubelet[3306]: E1013 05:38:30.919649 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.919739 kubelet[3306]: W1013 05:38:30.919685 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.920046 kubelet[3306]: E1013 05:38:30.919718 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:30.920559 kubelet[3306]: E1013 05:38:30.920267 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.920559 kubelet[3306]: W1013 05:38:30.920283 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.920559 kubelet[3306]: E1013 05:38:30.920297 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.920853 kubelet[3306]: E1013 05:38:30.920762 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.921375 kubelet[3306]: W1013 05:38:30.921318 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.921375 kubelet[3306]: E1013 05:38:30.921339 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.921989 kubelet[3306]: E1013 05:38:30.921847 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.921989 kubelet[3306]: W1013 05:38:30.921863 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.921989 kubelet[3306]: E1013 05:38:30.921879 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.922672 kubelet[3306]: E1013 05:38:30.922623 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.922937 kubelet[3306]: W1013 05:38:30.922765 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.922937 kubelet[3306]: E1013 05:38:30.922795 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.923290 kubelet[3306]: E1013 05:38:30.923221 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.923290 kubelet[3306]: W1013 05:38:30.923233 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.923290 kubelet[3306]: E1013 05:38:30.923246 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:30.924109 kubelet[3306]: E1013 05:38:30.924026 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.924109 kubelet[3306]: W1013 05:38:30.924041 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.924109 kubelet[3306]: E1013 05:38:30.924057 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.925100 kubelet[3306]: E1013 05:38:30.925052 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.925100 kubelet[3306]: W1013 05:38:30.925068 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.925557 kubelet[3306]: E1013 05:38:30.925259 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.926729 kubelet[3306]: E1013 05:38:30.926626 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.926818 kubelet[3306]: W1013 05:38:30.926742 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.926818 kubelet[3306]: E1013 05:38:30.926760 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.927456 containerd[1965]: time="2025-10-13T05:38:30.926896769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wpggc,Uid:b60ff97b-191f-4823-889c-af2d81bc5a36,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:30.927539 kubelet[3306]: E1013 05:38:30.927212 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.927539 kubelet[3306]: W1013 05:38:30.927223 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.927539 kubelet[3306]: E1013 05:38:30.927236 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:30.928458 kubelet[3306]: E1013 05:38:30.927914 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.928458 kubelet[3306]: W1013 05:38:30.927930 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.928458 kubelet[3306]: E1013 05:38:30.927944 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.929087 kubelet[3306]: E1013 05:38:30.929061 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.929087 kubelet[3306]: W1013 05:38:30.929078 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.929749 kubelet[3306]: E1013 05:38:30.929336 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.931015 kubelet[3306]: E1013 05:38:30.930997 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.931015 kubelet[3306]: W1013 05:38:30.931011 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.931145 kubelet[3306]: E1013 05:38:30.931025 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.931396 kubelet[3306]: E1013 05:38:30.931378 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.932821 kubelet[3306]: W1013 05:38:30.931404 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.932821 kubelet[3306]: E1013 05:38:30.931468 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.932821 kubelet[3306]: E1013 05:38:30.931690 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.932821 kubelet[3306]: W1013 05:38:30.931699 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.932821 kubelet[3306]: E1013 05:38:30.931708 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:30.932821 kubelet[3306]: E1013 05:38:30.932502 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.932821 kubelet[3306]: W1013 05:38:30.932534 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.932821 kubelet[3306]: E1013 05:38:30.932548 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.933252 kubelet[3306]: E1013 05:38:30.933241 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.933422 kubelet[3306]: W1013 05:38:30.933320 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.933422 kubelet[3306]: E1013 05:38:30.933334 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.933660 kubelet[3306]: E1013 05:38:30.933593 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.933660 kubelet[3306]: W1013 05:38:30.933607 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.933660 kubelet[3306]: E1013 05:38:30.933620 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.934009 kubelet[3306]: E1013 05:38:30.933948 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.934009 kubelet[3306]: W1013 05:38:30.933959 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.934009 kubelet[3306]: E1013 05:38:30.933968 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.936037 kubelet[3306]: E1013 05:38:30.936014 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.936037 kubelet[3306]: W1013 05:38:30.936035 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.936160 kubelet[3306]: E1013 05:38:30.936050 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:30.959326 kubelet[3306]: E1013 05:38:30.959293 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.959326 kubelet[3306]: W1013 05:38:30.959321 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.960661 kubelet[3306]: E1013 05:38:30.959343 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.960661 kubelet[3306]: I1013 05:38:30.959399 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/96c04204-34c8-461e-8a7a-c15984adac1a-varrun\") pod \"csi-node-driver-rx7bk\" (UID: \"96c04204-34c8-461e-8a7a-c15984adac1a\") " pod="calico-system/csi-node-driver-rx7bk" Oct 13 05:38:30.960661 kubelet[3306]: E1013 05:38:30.959775 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.960661 kubelet[3306]: W1013 05:38:30.959788 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.960661 kubelet[3306]: E1013 05:38:30.959815 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.960661 kubelet[3306]: E1013 05:38:30.960161 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.960661 kubelet[3306]: W1013 05:38:30.960170 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.960661 kubelet[3306]: E1013 05:38:30.960180 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.960661 kubelet[3306]: E1013 05:38:30.960635 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.961573 kubelet[3306]: W1013 05:38:30.960645 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.961573 kubelet[3306]: E1013 05:38:30.960757 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:30.961573 kubelet[3306]: I1013 05:38:30.960790 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96c04204-34c8-461e-8a7a-c15984adac1a-registration-dir\") pod \"csi-node-driver-rx7bk\" (UID: \"96c04204-34c8-461e-8a7a-c15984adac1a\") " pod="calico-system/csi-node-driver-rx7bk" Oct 13 05:38:30.961573 kubelet[3306]: E1013 05:38:30.961568 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.961850 kubelet[3306]: W1013 05:38:30.961832 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.961893 kubelet[3306]: E1013 05:38:30.961853 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.961893 kubelet[3306]: I1013 05:38:30.961872 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96c04204-34c8-461e-8a7a-c15984adac1a-socket-dir\") pod \"csi-node-driver-rx7bk\" (UID: \"96c04204-34c8-461e-8a7a-c15984adac1a\") " pod="calico-system/csi-node-driver-rx7bk" Oct 13 05:38:30.962627 kubelet[3306]: E1013 05:38:30.962592 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.962627 kubelet[3306]: W1013 05:38:30.962624 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.962729 kubelet[3306]: E1013 05:38:30.962636 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.962729 kubelet[3306]: I1013 05:38:30.962717 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c9jt\" (UniqueName: \"kubernetes.io/projected/96c04204-34c8-461e-8a7a-c15984adac1a-kube-api-access-6c9jt\") pod \"csi-node-driver-rx7bk\" (UID: \"96c04204-34c8-461e-8a7a-c15984adac1a\") " pod="calico-system/csi-node-driver-rx7bk" Oct 13 05:38:30.962889 kubelet[3306]: E1013 05:38:30.962874 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.962889 kubelet[3306]: W1013 05:38:30.962887 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.962955 kubelet[3306]: E1013 05:38:30.962897 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:30.963277 kubelet[3306]: E1013 05:38:30.963250 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.963277 kubelet[3306]: W1013 05:38:30.963266 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.963277 kubelet[3306]: E1013 05:38:30.963277 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.963670 kubelet[3306]: E1013 05:38:30.963652 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.963670 kubelet[3306]: W1013 05:38:30.963669 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.963761 kubelet[3306]: E1013 05:38:30.963683 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.963790 kubelet[3306]: I1013 05:38:30.963777 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96c04204-34c8-461e-8a7a-c15984adac1a-kubelet-dir\") pod \"csi-node-driver-rx7bk\" (UID: \"96c04204-34c8-461e-8a7a-c15984adac1a\") " pod="calico-system/csi-node-driver-rx7bk" Oct 13 05:38:30.964639 kubelet[3306]: E1013 05:38:30.964619 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.964639 kubelet[3306]: W1013 05:38:30.964637 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.964639 kubelet[3306]: E1013 05:38:30.964649 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.965391 kubelet[3306]: E1013 05:38:30.964819 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.965391 kubelet[3306]: W1013 05:38:30.964849 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.965391 kubelet[3306]: E1013 05:38:30.964859 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:30.965391 kubelet[3306]: E1013 05:38:30.965102 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.965391 kubelet[3306]: W1013 05:38:30.965110 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.965391 kubelet[3306]: E1013 05:38:30.965118 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.965699 kubelet[3306]: E1013 05:38:30.965674 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.965699 kubelet[3306]: W1013 05:38:30.965690 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.965762 kubelet[3306]: E1013 05:38:30.965701 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.966049 kubelet[3306]: E1013 05:38:30.966020 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.966049 kubelet[3306]: W1013 05:38:30.966031 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.966049 kubelet[3306]: E1013 05:38:30.966041 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.966568 kubelet[3306]: E1013 05:38:30.966542 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:30.966568 kubelet[3306]: W1013 05:38:30.966556 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:30.966568 kubelet[3306]: E1013 05:38:30.966567 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:30.970325 containerd[1965]: time="2025-10-13T05:38:30.969550615Z" level=info msg="connecting to shim 82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1" address="unix:///run/containerd/s/ec3c98871606db2f3a5cb0656a7de87f8f83381dbd7638c3d88e0b4ab31ee364" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:31.002217 systemd[1]: Started cri-containerd-82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1.scope - libcontainer container 82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1. 
Oct 13 05:38:31.068240 kubelet[3306]: E1013 05:38:31.066400 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.068240 kubelet[3306]: W1013 05:38:31.066526 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.068240 kubelet[3306]: E1013 05:38:31.066553 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.068240 kubelet[3306]: E1013 05:38:31.067034 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.068240 kubelet[3306]: W1013 05:38:31.067050 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.068240 kubelet[3306]: E1013 05:38:31.067066 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.068240 kubelet[3306]: E1013 05:38:31.067550 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.068240 kubelet[3306]: W1013 05:38:31.067562 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.068240 kubelet[3306]: E1013 05:38:31.067577 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.068240 kubelet[3306]: E1013 05:38:31.067992 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.070154 kubelet[3306]: W1013 05:38:31.068003 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.070154 kubelet[3306]: E1013 05:38:31.068016 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.070245 kubelet[3306]: E1013 05:38:31.070199 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.070245 kubelet[3306]: W1013 05:38:31.070212 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.070245 kubelet[3306]: E1013 05:38:31.070228 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:31.074438 kubelet[3306]: E1013 05:38:31.070506 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.074438 kubelet[3306]: W1013 05:38:31.070519 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.074438 kubelet[3306]: E1013 05:38:31.070532 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.074438 kubelet[3306]: E1013 05:38:31.070783 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.074438 kubelet[3306]: W1013 05:38:31.070797 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.074438 kubelet[3306]: E1013 05:38:31.070809 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.074438 kubelet[3306]: E1013 05:38:31.071058 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.074438 kubelet[3306]: W1013 05:38:31.071068 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.074438 kubelet[3306]: E1013 05:38:31.071079 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.074438 kubelet[3306]: E1013 05:38:31.071320 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.074897 kubelet[3306]: W1013 05:38:31.071328 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.074897 kubelet[3306]: E1013 05:38:31.071341 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.074897 kubelet[3306]: E1013 05:38:31.071701 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.074897 kubelet[3306]: W1013 05:38:31.071714 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.074897 kubelet[3306]: E1013 05:38:31.071727 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:31.074897 kubelet[3306]: E1013 05:38:31.071937 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.074897 kubelet[3306]: W1013 05:38:31.071946 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.074897 kubelet[3306]: E1013 05:38:31.071957 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.074897 kubelet[3306]: E1013 05:38:31.072190 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.074897 kubelet[3306]: W1013 05:38:31.072199 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.075297 kubelet[3306]: E1013 05:38:31.072210 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.075297 kubelet[3306]: E1013 05:38:31.072433 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.075297 kubelet[3306]: W1013 05:38:31.072441 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.075297 kubelet[3306]: E1013 05:38:31.072451 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.075297 kubelet[3306]: E1013 05:38:31.072668 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.075297 kubelet[3306]: W1013 05:38:31.072676 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.075297 kubelet[3306]: E1013 05:38:31.072686 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.075297 kubelet[3306]: E1013 05:38:31.072904 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.075297 kubelet[3306]: W1013 05:38:31.072913 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.075297 kubelet[3306]: E1013 05:38:31.072924 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:31.077333 kubelet[3306]: E1013 05:38:31.073167 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.077333 kubelet[3306]: W1013 05:38:31.073177 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.077333 kubelet[3306]: E1013 05:38:31.073188 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.077333 kubelet[3306]: E1013 05:38:31.073487 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.077333 kubelet[3306]: W1013 05:38:31.073497 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.077333 kubelet[3306]: E1013 05:38:31.073510 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.077333 kubelet[3306]: E1013 05:38:31.073751 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.077333 kubelet[3306]: W1013 05:38:31.073764 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.077333 kubelet[3306]: E1013 05:38:31.073778 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.077333 kubelet[3306]: E1013 05:38:31.074048 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.078793 kubelet[3306]: W1013 05:38:31.074059 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.078793 kubelet[3306]: E1013 05:38:31.074070 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.078793 kubelet[3306]: E1013 05:38:31.074270 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.078793 kubelet[3306]: W1013 05:38:31.074279 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.078793 kubelet[3306]: E1013 05:38:31.074291 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:31.078793 kubelet[3306]: E1013 05:38:31.075506 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.078793 kubelet[3306]: W1013 05:38:31.075519 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.078793 kubelet[3306]: E1013 05:38:31.075533 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.078793 kubelet[3306]: E1013 05:38:31.075834 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.078793 kubelet[3306]: W1013 05:38:31.075844 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.079169 kubelet[3306]: E1013 05:38:31.075857 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.079169 kubelet[3306]: E1013 05:38:31.076470 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.079169 kubelet[3306]: W1013 05:38:31.076483 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.079169 kubelet[3306]: E1013 05:38:31.076496 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.079169 kubelet[3306]: E1013 05:38:31.076865 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.079169 kubelet[3306]: W1013 05:38:31.076877 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.079169 kubelet[3306]: E1013 05:38:31.076889 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:31.079169 kubelet[3306]: E1013 05:38:31.077547 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.079169 kubelet[3306]: W1013 05:38:31.077558 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.079169 kubelet[3306]: E1013 05:38:31.077573 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:31.107942 containerd[1965]: time="2025-10-13T05:38:31.104760000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wpggc,Uid:b60ff97b-191f-4823-889c-af2d81bc5a36,Namespace:calico-system,Attempt:0,} returns sandbox id \"82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1\"" Oct 13 05:38:31.132241 kubelet[3306]: E1013 05:38:31.132143 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:31.132404 kubelet[3306]: W1013 05:38:31.132388 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:31.132553 kubelet[3306]: E1013 05:38:31.132493 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:32.311748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638963914.mount: Deactivated successfully. Oct 13 05:38:32.779380 kubelet[3306]: E1013 05:38:32.778833 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx7bk" podUID="96c04204-34c8-461e-8a7a-c15984adac1a" Oct 13 05:38:33.491938 containerd[1965]: time="2025-10-13T05:38:33.491862080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:33.493881 containerd[1965]: time="2025-10-13T05:38:33.493724092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:38:33.502284 containerd[1965]: time="2025-10-13T05:38:33.502235363Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:33.506448 containerd[1965]: time="2025-10-13T05:38:33.506290745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:33.507175 containerd[1965]: time="2025-10-13T05:38:33.506754433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.643708034s" Oct 13 05:38:33.507175 containerd[1965]: time="2025-10-13T05:38:33.506787752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:38:33.508397 containerd[1965]: time="2025-10-13T05:38:33.508378012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:38:33.527809 containerd[1965]: time="2025-10-13T05:38:33.527762686Z" level=info msg="CreateContainer within sandbox \"52686f0a93b96eb8c6a8790b0fc569a7134e0572172ac803c58fcbfb8ca5cce1\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:38:33.542006 containerd[1965]: time="2025-10-13T05:38:33.541971782Z" level=info msg="Container d76e05d504ec0af6728490a193593997839dfd852e82c1104e91deca6f13abbb: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:33.558651 containerd[1965]: time="2025-10-13T05:38:33.558597333Z" level=info msg="CreateContainer within sandbox \"52686f0a93b96eb8c6a8790b0fc569a7134e0572172ac803c58fcbfb8ca5cce1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d76e05d504ec0af6728490a193593997839dfd852e82c1104e91deca6f13abbb\"" Oct 13 05:38:33.561048 containerd[1965]: time="2025-10-13T05:38:33.559621598Z" level=info msg="StartContainer for \"d76e05d504ec0af6728490a193593997839dfd852e82c1104e91deca6f13abbb\"" Oct 13 05:38:33.561048 containerd[1965]: time="2025-10-13T05:38:33.560665117Z" level=info msg="connecting to shim d76e05d504ec0af6728490a193593997839dfd852e82c1104e91deca6f13abbb" address="unix:///run/containerd/s/ff22beb4429b39a11181ddd75534ae892a44a1f6efe203fd22b5d93a7f48ea7e" protocol=ttrpc version=3 Oct 13 05:38:33.612663 systemd[1]: Started cri-containerd-d76e05d504ec0af6728490a193593997839dfd852e82c1104e91deca6f13abbb.scope - libcontainer container d76e05d504ec0af6728490a193593997839dfd852e82c1104e91deca6f13abbb. Oct 13 05:38:33.706984 containerd[1965]: time="2025-10-13T05:38:33.706929177Z" level=info msg="StartContainer for \"d76e05d504ec0af6728490a193593997839dfd852e82c1104e91deca6f13abbb\" returns successfully" Oct 13 05:38:34.161029 kubelet[3306]: E1013 05:38:34.160887 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.161029 kubelet[3306]: W1013 05:38:34.160939 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.162218 kubelet[3306]: E1013 05:38:34.160965 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.162218 kubelet[3306]: E1013 05:38:34.161884 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.162218 kubelet[3306]: W1013 05:38:34.161899 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.162218 kubelet[3306]: E1013 05:38:34.161937 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.163986 kubelet[3306]: E1013 05:38:34.162245 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.163986 kubelet[3306]: W1013 05:38:34.162259 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.163986 kubelet[3306]: E1013 05:38:34.162278 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:34.163986 kubelet[3306]: E1013 05:38:34.162568 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.163986 kubelet[3306]: W1013 05:38:34.162590 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.163986 kubelet[3306]: E1013 05:38:34.162606 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.163986 kubelet[3306]: E1013 05:38:34.163024 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.163986 kubelet[3306]: W1013 05:38:34.163039 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.163986 kubelet[3306]: E1013 05:38:34.163051 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.163986 kubelet[3306]: E1013 05:38:34.163988 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.164808 kubelet[3306]: W1013 05:38:34.164000 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.164808 kubelet[3306]: E1013 05:38:34.164015 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.164808 kubelet[3306]: E1013 05:38:34.164607 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.164808 kubelet[3306]: W1013 05:38:34.164622 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.164808 kubelet[3306]: E1013 05:38:34.164637 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.166002 kubelet[3306]: E1013 05:38:34.165539 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.166002 kubelet[3306]: W1013 05:38:34.165552 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.166002 kubelet[3306]: E1013 05:38:34.165566 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:34.166002 kubelet[3306]: E1013 05:38:34.165857 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.166002 kubelet[3306]: W1013 05:38:34.165868 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.166002 kubelet[3306]: E1013 05:38:34.165905 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.167689 kubelet[3306]: E1013 05:38:34.166156 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.167689 kubelet[3306]: W1013 05:38:34.166166 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.167689 kubelet[3306]: E1013 05:38:34.166179 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.167689 kubelet[3306]: E1013 05:38:34.166574 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.167689 kubelet[3306]: W1013 05:38:34.166595 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.167689 kubelet[3306]: E1013 05:38:34.166609 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.167689 kubelet[3306]: E1013 05:38:34.166874 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.167689 kubelet[3306]: W1013 05:38:34.166885 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.167689 kubelet[3306]: E1013 05:38:34.166897 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.169578 kubelet[3306]: E1013 05:38:34.168397 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.169578 kubelet[3306]: W1013 05:38:34.168454 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.169578 kubelet[3306]: E1013 05:38:34.168471 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:34.170515 kubelet[3306]: E1013 05:38:34.169722 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.170515 kubelet[3306]: W1013 05:38:34.169735 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.170515 kubelet[3306]: E1013 05:38:34.169750 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.170515 kubelet[3306]: E1013 05:38:34.170261 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.170515 kubelet[3306]: W1013 05:38:34.170275 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.170515 kubelet[3306]: E1013 05:38:34.170288 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.197686 kubelet[3306]: E1013 05:38:34.197650 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.197686 kubelet[3306]: W1013 05:38:34.197676 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.197686 kubelet[3306]: E1013 05:38:34.197700 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.201739 kubelet[3306]: E1013 05:38:34.200509 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.201739 kubelet[3306]: W1013 05:38:34.200532 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.201739 kubelet[3306]: E1013 05:38:34.200554 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.226814 kubelet[3306]: E1013 05:38:34.226773 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.226814 kubelet[3306]: W1013 05:38:34.226801 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.227013 kubelet[3306]: E1013 05:38:34.226826 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:34.227977 kubelet[3306]: E1013 05:38:34.227948 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.227977 kubelet[3306]: W1013 05:38:34.227969 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.228144 kubelet[3306]: E1013 05:38:34.227990 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.228677 kubelet[3306]: E1013 05:38:34.228492 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.228677 kubelet[3306]: W1013 05:38:34.228673 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.228977 kubelet[3306]: E1013 05:38:34.228949 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.229665 kubelet[3306]: E1013 05:38:34.229495 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.229665 kubelet[3306]: W1013 05:38:34.229512 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.229665 kubelet[3306]: E1013 05:38:34.229527 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.231202 kubelet[3306]: E1013 05:38:34.230574 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.231202 kubelet[3306]: W1013 05:38:34.230590 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.231202 kubelet[3306]: E1013 05:38:34.230605 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.231202 kubelet[3306]: E1013 05:38:34.230844 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.231202 kubelet[3306]: W1013 05:38:34.230854 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.231202 kubelet[3306]: E1013 05:38:34.230867 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:34.231202 kubelet[3306]: E1013 05:38:34.231167 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.231202 kubelet[3306]: W1013 05:38:34.231179 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.231202 kubelet[3306]: E1013 05:38:34.231193 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.236147 kubelet[3306]: E1013 05:38:34.236118 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.236147 kubelet[3306]: W1013 05:38:34.236145 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.236313 kubelet[3306]: E1013 05:38:34.236169 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.237956 kubelet[3306]: E1013 05:38:34.237933 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.237956 kubelet[3306]: W1013 05:38:34.237954 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.238116 kubelet[3306]: E1013 05:38:34.237974 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.238807 kubelet[3306]: E1013 05:38:34.238783 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.238807 kubelet[3306]: W1013 05:38:34.238803 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.238941 kubelet[3306]: E1013 05:38:34.238819 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.239531 kubelet[3306]: E1013 05:38:34.239210 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.239531 kubelet[3306]: W1013 05:38:34.239222 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.239531 kubelet[3306]: E1013 05:38:34.239235 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:34.240542 kubelet[3306]: E1013 05:38:34.240521 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.240542 kubelet[3306]: W1013 05:38:34.240541 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.240666 kubelet[3306]: E1013 05:38:34.240556 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.240872 kubelet[3306]: E1013 05:38:34.240857 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.240872 kubelet[3306]: W1013 05:38:34.240872 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.240980 kubelet[3306]: E1013 05:38:34.240886 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.242726 kubelet[3306]: E1013 05:38:34.242704 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.242726 kubelet[3306]: W1013 05:38:34.242726 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.242842 kubelet[3306]: E1013 05:38:34.242743 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.245361 kubelet[3306]: E1013 05:38:34.245339 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.245361 kubelet[3306]: W1013 05:38:34.245360 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.245542 kubelet[3306]: E1013 05:38:34.245377 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:34.247453 kubelet[3306]: E1013 05:38:34.246698 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:34.247453 kubelet[3306]: W1013 05:38:34.246715 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:34.247453 kubelet[3306]: E1013 05:38:34.246732 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:34.782394 kubelet[3306]: E1013 05:38:34.782333 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx7bk" podUID="96c04204-34c8-461e-8a7a-c15984adac1a" Oct 13 05:38:35.059984 containerd[1965]: time="2025-10-13T05:38:35.059873026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:35.062272 containerd[1965]: time="2025-10-13T05:38:35.062050773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:38:35.064261 containerd[1965]: time="2025-10-13T05:38:35.064221373Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:35.067973 containerd[1965]: time="2025-10-13T05:38:35.067598594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:35.068555 containerd[1965]: time="2025-10-13T05:38:35.068507776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.559982613s" Oct 13 05:38:35.068636 containerd[1965]: time="2025-10-13T05:38:35.068556242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:38:35.076621 containerd[1965]: time="2025-10-13T05:38:35.076565521Z" level=info msg="CreateContainer within sandbox \"82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:38:35.125115 containerd[1965]: time="2025-10-13T05:38:35.121640373Z" level=info msg="Container ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:35.154539 kubelet[3306]: I1013 05:38:35.153990 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7578cc457c-fn9c7" podStartSLOduration=2.507621843 podStartE2EDuration="5.153185101s" podCreationTimestamp="2025-10-13 05:38:30 +0000 UTC" firstStartedPulling="2025-10-13 05:38:30.862462071 +0000 UTC m=+51.221886828" lastFinishedPulling="2025-10-13 05:38:33.508025336 +0000 UTC m=+53.867450086" observedRunningTime="2025-10-13 05:38:34.19493954 +0000 UTC m=+54.554364329" watchObservedRunningTime="2025-10-13 05:38:35.153185101 +0000 UTC m=+55.512609874" Oct 13 05:38:35.165436 containerd[1965]: time="2025-10-13T05:38:35.164178641Z" level=info msg="CreateContainer within sandbox \"82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id 
\"ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b\"" Oct 13 05:38:35.166385 containerd[1965]: time="2025-10-13T05:38:35.166353390Z" level=info msg="StartContainer for \"ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b\"" Oct 13 05:38:35.167922 containerd[1965]: time="2025-10-13T05:38:35.167893287Z" level=info msg="connecting to shim ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b" address="unix:///run/containerd/s/ec3c98871606db2f3a5cb0656a7de87f8f83381dbd7638c3d88e0b4ab31ee364" protocol=ttrpc version=3 Oct 13 05:38:35.176944 kubelet[3306]: E1013 05:38:35.176912 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.176944 kubelet[3306]: W1013 05:38:35.176933 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.177514 kubelet[3306]: E1013 05:38:35.176953 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.180018 kubelet[3306]: E1013 05:38:35.179990 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.180018 kubelet[3306]: W1013 05:38:35.180011 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.181327 kubelet[3306]: E1013 05:38:35.180030 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.181327 kubelet[3306]: E1013 05:38:35.180210 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.181327 kubelet[3306]: W1013 05:38:35.180227 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.181327 kubelet[3306]: E1013 05:38:35.180237 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.183901 kubelet[3306]: E1013 05:38:35.182684 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.183901 kubelet[3306]: W1013 05:38:35.182739 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.183901 kubelet[3306]: E1013 05:38:35.182768 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:35.184527 kubelet[3306]: E1013 05:38:35.184506 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.184527 kubelet[3306]: W1013 05:38:35.184526 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.184639 kubelet[3306]: E1013 05:38:35.184545 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.184839 kubelet[3306]: E1013 05:38:35.184822 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.184839 kubelet[3306]: W1013 05:38:35.184837 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.184901 kubelet[3306]: E1013 05:38:35.184847 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.185655 kubelet[3306]: E1013 05:38:35.185636 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.185655 kubelet[3306]: W1013 05:38:35.185653 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.185762 kubelet[3306]: E1013 05:38:35.185665 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.186635 kubelet[3306]: E1013 05:38:35.186615 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.186635 kubelet[3306]: W1013 05:38:35.186632 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.186736 kubelet[3306]: E1013 05:38:35.186644 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.188378 kubelet[3306]: E1013 05:38:35.188088 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.188378 kubelet[3306]: W1013 05:38:35.188107 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.188378 kubelet[3306]: E1013 05:38:35.188124 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:35.188595 kubelet[3306]: E1013 05:38:35.188467 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.188595 kubelet[3306]: W1013 05:38:35.188479 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.188595 kubelet[3306]: E1013 05:38:35.188492 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.189157 kubelet[3306]: E1013 05:38:35.188878 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.189157 kubelet[3306]: W1013 05:38:35.188891 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.189157 kubelet[3306]: E1013 05:38:35.188904 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.190041 kubelet[3306]: E1013 05:38:35.189537 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.190041 kubelet[3306]: W1013 05:38:35.189550 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.190041 kubelet[3306]: E1013 05:38:35.189564 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.190183 kubelet[3306]: E1013 05:38:35.190125 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.190183 kubelet[3306]: W1013 05:38:35.190138 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.190183 kubelet[3306]: E1013 05:38:35.190151 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.191077 kubelet[3306]: E1013 05:38:35.190772 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.191077 kubelet[3306]: W1013 05:38:35.190784 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.191077 kubelet[3306]: E1013 05:38:35.190798 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:35.191875 kubelet[3306]: E1013 05:38:35.191361 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.191875 kubelet[3306]: W1013 05:38:35.191375 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.192018 kubelet[3306]: E1013 05:38:35.191947 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.210658 systemd[1]: Started cri-containerd-ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b.scope - libcontainer container ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b. Oct 13 05:38:35.235694 kubelet[3306]: E1013 05:38:35.235665 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.235694 kubelet[3306]: W1013 05:38:35.235685 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.236081 kubelet[3306]: E1013 05:38:35.235707 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.236558 kubelet[3306]: E1013 05:38:35.236510 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.236558 kubelet[3306]: W1013 05:38:35.236554 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.236839 kubelet[3306]: E1013 05:38:35.236573 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.237048 kubelet[3306]: E1013 05:38:35.237008 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.237048 kubelet[3306]: W1013 05:38:35.237024 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.237048 kubelet[3306]: E1013 05:38:35.237041 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:35.237476 kubelet[3306]: E1013 05:38:35.237455 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.237476 kubelet[3306]: W1013 05:38:35.237472 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.237649 kubelet[3306]: E1013 05:38:35.237488 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.238713 kubelet[3306]: E1013 05:38:35.238667 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.238713 kubelet[3306]: W1013 05:38:35.238683 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.239079 kubelet[3306]: E1013 05:38:35.238698 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.239187 kubelet[3306]: E1013 05:38:35.239122 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.239187 kubelet[3306]: W1013 05:38:35.239135 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.239187 kubelet[3306]: E1013 05:38:35.239149 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.239537 kubelet[3306]: E1013 05:38:35.239450 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.239537 kubelet[3306]: W1013 05:38:35.239462 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.239537 kubelet[3306]: E1013 05:38:35.239476 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.239924 kubelet[3306]: E1013 05:38:35.239783 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.239924 kubelet[3306]: W1013 05:38:35.239794 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.239924 kubelet[3306]: E1013 05:38:35.239807 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:35.240455 kubelet[3306]: E1013 05:38:35.240403 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.240455 kubelet[3306]: W1013 05:38:35.240430 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.240455 kubelet[3306]: E1013 05:38:35.240445 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.241532 kubelet[3306]: E1013 05:38:35.241513 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.241717 kubelet[3306]: W1013 05:38:35.241607 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.241717 kubelet[3306]: E1013 05:38:35.241649 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.242051 kubelet[3306]: E1013 05:38:35.242028 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.242232 kubelet[3306]: W1013 05:38:35.242140 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.242232 kubelet[3306]: E1013 05:38:35.242158 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.242633 kubelet[3306]: E1013 05:38:35.242552 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.242633 kubelet[3306]: W1013 05:38:35.242567 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.242633 kubelet[3306]: E1013 05:38:35.242582 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.243191 kubelet[3306]: E1013 05:38:35.243077 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.243191 kubelet[3306]: W1013 05:38:35.243093 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.243191 kubelet[3306]: E1013 05:38:35.243106 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:35.243811 kubelet[3306]: E1013 05:38:35.243791 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.243811 kubelet[3306]: W1013 05:38:35.243808 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.243912 kubelet[3306]: E1013 05:38:35.243824 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.244650 kubelet[3306]: E1013 05:38:35.244520 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.244650 kubelet[3306]: W1013 05:38:35.244535 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.244650 kubelet[3306]: E1013 05:38:35.244550 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.245055 kubelet[3306]: E1013 05:38:35.244927 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.245055 kubelet[3306]: W1013 05:38:35.244944 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.245055 kubelet[3306]: E1013 05:38:35.244958 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.245433 kubelet[3306]: E1013 05:38:35.245399 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.245797 kubelet[3306]: W1013 05:38:35.245538 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.245797 kubelet[3306]: E1013 05:38:35.245560 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:38:35.246107 kubelet[3306]: E1013 05:38:35.246093 3306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:38:35.246194 kubelet[3306]: W1013 05:38:35.246181 3306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:38:35.246277 kubelet[3306]: E1013 05:38:35.246264 3306 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:38:35.277271 containerd[1965]: time="2025-10-13T05:38:35.277078003Z" level=info msg="StartContainer for \"ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b\" returns successfully" Oct 13 05:38:35.286954 systemd[1]: cri-containerd-ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b.scope: Deactivated successfully. Oct 13 05:38:35.344582 containerd[1965]: time="2025-10-13T05:38:35.343226323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b\" id:\"ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b\" pid:4190 exited_at:{seconds:1760333915 nanos:298829185}" Oct 13 05:38:35.352837 containerd[1965]: time="2025-10-13T05:38:35.352788765Z" level=info msg="received exit event container_id:\"ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b\" id:\"ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b\" pid:4190 exited_at:{seconds:1760333915 nanos:298829185}" Oct 13 05:38:35.385861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee58e0096fb4911fe3e867e8049e13ab2365e8244c76a7fb5fb93c2a6e91ee9b-rootfs.mount: Deactivated successfully. Oct 13 05:38:36.141092 containerd[1965]: time="2025-10-13T05:38:36.141021810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:38:36.779013 kubelet[3306]: E1013 05:38:36.778836 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx7bk" podUID="96c04204-34c8-461e-8a7a-c15984adac1a" Oct 13 05:38:38.778402 kubelet[3306]: E1013 05:38:38.778351 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx7bk" podUID="96c04204-34c8-461e-8a7a-c15984adac1a" Oct 13 05:38:40.778848 kubelet[3306]: E1013 05:38:40.778790 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx7bk" podUID="96c04204-34c8-461e-8a7a-c15984adac1a" Oct 13 05:38:41.767216 containerd[1965]: time="2025-10-13T05:38:41.767102293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:41.769105 containerd[1965]: time="2025-10-13T05:38:41.769055124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:38:41.771237 containerd[1965]: time="2025-10-13T05:38:41.771181005Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:41.774548 containerd[1965]: time="2025-10-13T05:38:41.774512365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:41.775189 containerd[1965]: 
time="2025-10-13T05:38:41.775151036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.634086203s" Oct 13 05:38:41.775277 containerd[1965]: time="2025-10-13T05:38:41.775263777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:38:41.822001 containerd[1965]: time="2025-10-13T05:38:41.821929290Z" level=info msg="CreateContainer within sandbox \"82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:38:41.840196 containerd[1965]: time="2025-10-13T05:38:41.839626570Z" level=info msg="Container 69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:41.844327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1203008756.mount: Deactivated successfully. Oct 13 05:38:41.858911 containerd[1965]: time="2025-10-13T05:38:41.858856282Z" level=info msg="CreateContainer within sandbox \"82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436\"" Oct 13 05:38:41.861844 containerd[1965]: time="2025-10-13T05:38:41.859645775Z" level=info msg="StartContainer for \"69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436\"" Oct 13 05:38:41.862436 containerd[1965]: time="2025-10-13T05:38:41.862341925Z" level=info msg="connecting to shim 69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436" address="unix:///run/containerd/s/ec3c98871606db2f3a5cb0656a7de87f8f83381dbd7638c3d88e0b4ab31ee364" protocol=ttrpc version=3 Oct 13 05:38:41.891801 systemd[1]: Started cri-containerd-69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436.scope - libcontainer container 69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436. Oct 13 05:38:41.940594 containerd[1965]: time="2025-10-13T05:38:41.940552048Z" level=info msg="StartContainer for \"69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436\" returns successfully" Oct 13 05:38:42.778439 kubelet[3306]: E1013 05:38:42.778354 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx7bk" podUID="96c04204-34c8-461e-8a7a-c15984adac1a" Oct 13 05:38:43.062748 systemd[1]: cri-containerd-69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436.scope: Deactivated successfully. Oct 13 05:38:43.063002 systemd[1]: cri-containerd-69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436.scope: Consumed 535ms CPU time, 162.7M memory peak, 5.5M read from disk, 171.3M written to disk. 
Oct 13 05:38:43.108848 containerd[1965]: time="2025-10-13T05:38:43.108770989Z" level=info msg="received exit event container_id:\"69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436\" id:\"69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436\" pid:4268 exited_at:{seconds:1760333923 nanos:108552960}" Oct 13 05:38:43.109219 containerd[1965]: time="2025-10-13T05:38:43.109021964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436\" id:\"69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436\" pid:4268 exited_at:{seconds:1760333923 nanos:108552960}" Oct 13 05:38:43.149462 kubelet[3306]: I1013 05:38:43.149427 3306 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 05:38:43.161091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69d82e8b3e26f9e5ea34998cd20b6db23548c86fb083c74d9fe30ce9212e1436-rootfs.mount: Deactivated successfully. Oct 13 05:38:43.270521 systemd[1]: Created slice kubepods-besteffort-pod05ff0b1d_84db_465d_be1a_932a1394d5de.slice - libcontainer container kubepods-besteffort-pod05ff0b1d_84db_465d_be1a_932a1394d5de.slice. Oct 13 05:38:43.288904 systemd[1]: Created slice kubepods-burstable-pod0f8dfa93_d61a_4e61_90cb_78d638d4cb8b.slice - libcontainer container kubepods-burstable-pod0f8dfa93_d61a_4e61_90cb_78d638d4cb8b.slice. Oct 13 05:38:43.293669 kubelet[3306]: I1013 05:38:43.293645 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5cv5\" (UniqueName: \"kubernetes.io/projected/0f8dfa93-d61a-4e61-90cb-78d638d4cb8b-kube-api-access-x5cv5\") pod \"coredns-674b8bbfcf-8gw5k\" (UID: \"0f8dfa93-d61a-4e61-90cb-78d638d4cb8b\") " pod="kube-system/coredns-674b8bbfcf-8gw5k" Oct 13 05:38:43.293854 kubelet[3306]: I1013 05:38:43.293840 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6919762-43d3-4224-82c3-07710fad2af5-config-volume\") pod \"coredns-674b8bbfcf-b4hm2\" (UID: \"d6919762-43d3-4224-82c3-07710fad2af5\") " pod="kube-system/coredns-674b8bbfcf-b4hm2" Oct 13 05:38:43.293950 kubelet[3306]: I1013 05:38:43.293938 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jfx\" (UniqueName: \"kubernetes.io/projected/05ff0b1d-84db-465d-be1a-932a1394d5de-kube-api-access-p7jfx\") pod \"whisker-65bc8d9dfd-dsngt\" (UID: \"05ff0b1d-84db-465d-be1a-932a1394d5de\") " pod="calico-system/whisker-65bc8d9dfd-dsngt" Oct 13 05:38:43.294012 kubelet[3306]: I1013 05:38:43.294004 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05ff0b1d-84db-465d-be1a-932a1394d5de-whisker-backend-key-pair\") pod \"whisker-65bc8d9dfd-dsngt\" (UID: \"05ff0b1d-84db-465d-be1a-932a1394d5de\") " pod="calico-system/whisker-65bc8d9dfd-dsngt" Oct 13 05:38:43.294069 kubelet[3306]: I1013 05:38:43.294061 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r965f\" (UniqueName: \"kubernetes.io/projected/d6919762-43d3-4224-82c3-07710fad2af5-kube-api-access-r965f\") pod \"coredns-674b8bbfcf-b4hm2\" (UID: \"d6919762-43d3-4224-82c3-07710fad2af5\") " pod="kube-system/coredns-674b8bbfcf-b4hm2" Oct 13 05:38:43.294124 kubelet[3306]: I1013 05:38:43.294116 3306 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f8dfa93-d61a-4e61-90cb-78d638d4cb8b-config-volume\") pod \"coredns-674b8bbfcf-8gw5k\" (UID: \"0f8dfa93-d61a-4e61-90cb-78d638d4cb8b\") " pod="kube-system/coredns-674b8bbfcf-8gw5k" Oct 13 05:38:43.294195 kubelet[3306]: I1013 05:38:43.294185 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05ff0b1d-84db-465d-be1a-932a1394d5de-whisker-ca-bundle\") pod \"whisker-65bc8d9dfd-dsngt\" (UID: \"05ff0b1d-84db-465d-be1a-932a1394d5de\") " pod="calico-system/whisker-65bc8d9dfd-dsngt" Oct 13 05:38:43.298329 containerd[1965]: time="2025-10-13T05:38:43.298285195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:38:43.299564 systemd[1]: Created slice kubepods-besteffort-pod61b998a9_69f8_4371_990d_0d5850be9043.slice - libcontainer container kubepods-besteffort-pod61b998a9_69f8_4371_990d_0d5850be9043.slice. Oct 13 05:38:43.311711 systemd[1]: Created slice kubepods-besteffort-pod21eb228e_1a02_4f13_911b_b96a622340c7.slice - libcontainer container kubepods-besteffort-pod21eb228e_1a02_4f13_911b_b96a622340c7.slice. Oct 13 05:38:43.325047 systemd[1]: Created slice kubepods-besteffort-pod5ca3db14_0698_4639_b240_ba20e318c953.slice - libcontainer container kubepods-besteffort-pod5ca3db14_0698_4639_b240_ba20e318c953.slice. Oct 13 05:38:43.336526 systemd[1]: Created slice kubepods-besteffort-pod405beab3_108b_401c_b301_f27eeb7eec5e.slice - libcontainer container kubepods-besteffort-pod405beab3_108b_401c_b301_f27eeb7eec5e.slice. Oct 13 05:38:43.346598 systemd[1]: Created slice kubepods-burstable-podd6919762_43d3_4224_82c3_07710fad2af5.slice - libcontainer container kubepods-burstable-podd6919762_43d3_4224_82c3_07710fad2af5.slice. 
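Annotation: the reconciler_common.go VerifyControllerAttachedVolume entries above list the volumes the kubelet wires up for the whisker and coredns pods (a CA-bundle ConfigMap, a backend key-pair Secret, config-volume ConfigMaps, and projected kube-api-access tokens). As a hedged reference, the sketch below declares the two whisker volumes with the k8s.io/api/core/v1 types; the referenced ConfigMap and Secret object names are assumptions, since the log exposes only the volume names and their source kinds.

// Sketch of the whisker pod's volume declarations as suggested by the
// VerifyControllerAttachedVolume entries. Object names are assumptions;
// the log shows only volume names and source types.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func whiskerVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			Name: "whisker-ca-bundle", // configmap-backed volume from the log
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "whisker-ca-bundle"}, // assumed object name
				},
			},
		},
		{
			Name: "whisker-backend-key-pair", // secret-backed volume from the log
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName: "whisker-backend-key-pair", // assumed object name
				},
			},
		},
	}
}

func main() {
	for _, v := range whiskerVolumes() {
		fmt.Println(v.Name)
	}
}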
Oct 13 05:38:43.395104 kubelet[3306]: I1013 05:38:43.395033 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq47g\" (UniqueName: \"kubernetes.io/projected/21eb228e-1a02-4f13-911b-b96a622340c7-kube-api-access-qq47g\") pod \"calico-apiserver-64486dfbd5-dvjbz\" (UID: \"21eb228e-1a02-4f13-911b-b96a622340c7\") " pod="calico-apiserver/calico-apiserver-64486dfbd5-dvjbz" Oct 13 05:38:43.395270 kubelet[3306]: I1013 05:38:43.395139 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/405beab3-108b-401c-b301-f27eeb7eec5e-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-jcxzl\" (UID: \"405beab3-108b-401c-b301-f27eeb7eec5e\") " pod="calico-system/goldmane-54d579b49d-jcxzl" Oct 13 05:38:43.395270 kubelet[3306]: I1013 05:38:43.395159 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hzmc\" (UniqueName: \"kubernetes.io/projected/61b998a9-69f8-4371-990d-0d5850be9043-kube-api-access-2hzmc\") pod \"calico-kube-controllers-5b559457d7-fhqld\" (UID: \"61b998a9-69f8-4371-990d-0d5850be9043\") " pod="calico-system/calico-kube-controllers-5b559457d7-fhqld" Oct 13 05:38:43.395270 kubelet[3306]: I1013 05:38:43.395176 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/21eb228e-1a02-4f13-911b-b96a622340c7-calico-apiserver-certs\") pod \"calico-apiserver-64486dfbd5-dvjbz\" (UID: \"21eb228e-1a02-4f13-911b-b96a622340c7\") " pod="calico-apiserver/calico-apiserver-64486dfbd5-dvjbz" Oct 13 05:38:43.395270 kubelet[3306]: I1013 05:38:43.395205 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfllf\" (UniqueName: \"kubernetes.io/projected/5ca3db14-0698-4639-b240-ba20e318c953-kube-api-access-sfllf\") pod \"calico-apiserver-64486dfbd5-dnft8\" (UID: \"5ca3db14-0698-4639-b240-ba20e318c953\") " pod="calico-apiserver/calico-apiserver-64486dfbd5-dnft8" Oct 13 05:38:43.395270 kubelet[3306]: I1013 05:38:43.395222 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61b998a9-69f8-4371-990d-0d5850be9043-tigera-ca-bundle\") pod \"calico-kube-controllers-5b559457d7-fhqld\" (UID: \"61b998a9-69f8-4371-990d-0d5850be9043\") " pod="calico-system/calico-kube-controllers-5b559457d7-fhqld" Oct 13 05:38:43.395575 kubelet[3306]: I1013 05:38:43.395256 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/405beab3-108b-401c-b301-f27eeb7eec5e-goldmane-key-pair\") pod \"goldmane-54d579b49d-jcxzl\" (UID: \"405beab3-108b-401c-b301-f27eeb7eec5e\") " pod="calico-system/goldmane-54d579b49d-jcxzl" Oct 13 05:38:43.395575 kubelet[3306]: I1013 05:38:43.395270 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85v7\" (UniqueName: \"kubernetes.io/projected/405beab3-108b-401c-b301-f27eeb7eec5e-kube-api-access-m85v7\") pod \"goldmane-54d579b49d-jcxzl\" (UID: \"405beab3-108b-401c-b301-f27eeb7eec5e\") " pod="calico-system/goldmane-54d579b49d-jcxzl" Oct 13 05:38:43.395575 kubelet[3306]: I1013 05:38:43.395289 3306 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405beab3-108b-401c-b301-f27eeb7eec5e-config\") pod \"goldmane-54d579b49d-jcxzl\" (UID: \"405beab3-108b-401c-b301-f27eeb7eec5e\") " pod="calico-system/goldmane-54d579b49d-jcxzl" Oct 13 05:38:43.395575 kubelet[3306]: I1013 05:38:43.395305 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ca3db14-0698-4639-b240-ba20e318c953-calico-apiserver-certs\") pod \"calico-apiserver-64486dfbd5-dnft8\" (UID: \"5ca3db14-0698-4639-b240-ba20e318c953\") " pod="calico-apiserver/calico-apiserver-64486dfbd5-dnft8" Oct 13 05:38:43.585925 containerd[1965]: time="2025-10-13T05:38:43.585813167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bc8d9dfd-dsngt,Uid:05ff0b1d-84db-465d-be1a-932a1394d5de,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:43.593943 containerd[1965]: time="2025-10-13T05:38:43.593895180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8gw5k,Uid:0f8dfa93-d61a-4e61-90cb-78d638d4cb8b,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:43.626516 containerd[1965]: time="2025-10-13T05:38:43.626251371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64486dfbd5-dvjbz,Uid:21eb228e-1a02-4f13-911b-b96a622340c7,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:38:43.643535 containerd[1965]: time="2025-10-13T05:38:43.643455424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64486dfbd5-dnft8,Uid:5ca3db14-0698-4639-b240-ba20e318c953,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:38:43.644449 containerd[1965]: time="2025-10-13T05:38:43.643758228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b559457d7-fhqld,Uid:61b998a9-69f8-4371-990d-0d5850be9043,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:43.648111 containerd[1965]: time="2025-10-13T05:38:43.647987595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jcxzl,Uid:405beab3-108b-401c-b301-f27eeb7eec5e,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:43.664773 containerd[1965]: time="2025-10-13T05:38:43.664728981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4hm2,Uid:d6919762-43d3-4224-82c3-07710fad2af5,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:44.114639 containerd[1965]: time="2025-10-13T05:38:44.114503484Z" level=error msg="Failed to destroy network for sandbox \"21336da183efbc2073e19962c4e2e9d0f6d9cb525d743d2c5882d9a6b55e7a78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.116160 containerd[1965]: time="2025-10-13T05:38:44.116107097Z" level=error msg="Failed to destroy network for sandbox \"4d8006a0885c76517bf8f511e8457683f3bdc0850931291125141f090ea9cdce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.119438 containerd[1965]: time="2025-10-13T05:38:44.118920516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jcxzl,Uid:405beab3-108b-401c-b301-f27eeb7eec5e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"21336da183efbc2073e19962c4e2e9d0f6d9cb525d743d2c5882d9a6b55e7a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.121887 kubelet[3306]: E1013 05:38:44.121815 3306 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21336da183efbc2073e19962c4e2e9d0f6d9cb525d743d2c5882d9a6b55e7a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.123336 kubelet[3306]: E1013 05:38:44.121925 3306 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21336da183efbc2073e19962c4e2e9d0f6d9cb525d743d2c5882d9a6b55e7a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jcxzl" Oct 13 05:38:44.123336 kubelet[3306]: E1013 05:38:44.122053 3306 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21336da183efbc2073e19962c4e2e9d0f6d9cb525d743d2c5882d9a6b55e7a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jcxzl" Oct 13 05:38:44.124045 containerd[1965]: time="2025-10-13T05:38:44.122002192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64486dfbd5-dnft8,Uid:5ca3db14-0698-4639-b240-ba20e318c953,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8006a0885c76517bf8f511e8457683f3bdc0850931291125141f090ea9cdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.129633 kubelet[3306]: E1013 05:38:44.127675 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-jcxzl_calico-system(405beab3-108b-401c-b301-f27eeb7eec5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-jcxzl_calico-system(405beab3-108b-401c-b301-f27eeb7eec5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21336da183efbc2073e19962c4e2e9d0f6d9cb525d743d2c5882d9a6b55e7a78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-jcxzl" podUID="405beab3-108b-401c-b301-f27eeb7eec5e" Oct 13 05:38:44.141943 containerd[1965]: time="2025-10-13T05:38:44.141873640Z" level=error msg="Failed to destroy network for sandbox \"ee3992599bc9f9484583fc67ff58c3a6c26d4e210b50988e45e8757e8082cd90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.144253 containerd[1965]: 
time="2025-10-13T05:38:44.144201833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8gw5k,Uid:0f8dfa93-d61a-4e61-90cb-78d638d4cb8b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee3992599bc9f9484583fc67ff58c3a6c26d4e210b50988e45e8757e8082cd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.145295 kubelet[3306]: E1013 05:38:44.145122 3306 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee3992599bc9f9484583fc67ff58c3a6c26d4e210b50988e45e8757e8082cd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.145295 kubelet[3306]: E1013 05:38:44.145186 3306 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee3992599bc9f9484583fc67ff58c3a6c26d4e210b50988e45e8757e8082cd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8gw5k" Oct 13 05:38:44.145295 kubelet[3306]: E1013 05:38:44.145215 3306 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee3992599bc9f9484583fc67ff58c3a6c26d4e210b50988e45e8757e8082cd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8gw5k" Oct 13 05:38:44.145558 kubelet[3306]: E1013 05:38:44.145279 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8gw5k_kube-system(0f8dfa93-d61a-4e61-90cb-78d638d4cb8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8gw5k_kube-system(0f8dfa93-d61a-4e61-90cb-78d638d4cb8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee3992599bc9f9484583fc67ff58c3a6c26d4e210b50988e45e8757e8082cd90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8gw5k" podUID="0f8dfa93-d61a-4e61-90cb-78d638d4cb8b" Oct 13 05:38:44.145558 kubelet[3306]: E1013 05:38:44.145340 3306 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8006a0885c76517bf8f511e8457683f3bdc0850931291125141f090ea9cdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.145558 kubelet[3306]: E1013 05:38:44.145369 3306 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8006a0885c76517bf8f511e8457683f3bdc0850931291125141f090ea9cdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64486dfbd5-dnft8" Oct 13 05:38:44.145728 kubelet[3306]: E1013 05:38:44.145389 3306 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8006a0885c76517bf8f511e8457683f3bdc0850931291125141f090ea9cdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64486dfbd5-dnft8" Oct 13 05:38:44.146740 kubelet[3306]: E1013 05:38:44.146679 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64486dfbd5-dnft8_calico-apiserver(5ca3db14-0698-4639-b240-ba20e318c953)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64486dfbd5-dnft8_calico-apiserver(5ca3db14-0698-4639-b240-ba20e318c953)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d8006a0885c76517bf8f511e8457683f3bdc0850931291125141f090ea9cdce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64486dfbd5-dnft8" podUID="5ca3db14-0698-4639-b240-ba20e318c953" Oct 13 05:38:44.190537 containerd[1965]: time="2025-10-13T05:38:44.190466365Z" level=error msg="Failed to destroy network for sandbox \"68e55d03ea7459c4838c57d99a9cc56f79ccde4d336aee7116c468c3af18b0cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.196158 containerd[1965]: time="2025-10-13T05:38:44.196112852Z" level=error msg="Failed to destroy network for sandbox \"7a5aa347dcac59dd298ee342b4082d213b3a354dd8e7357ac22a79c235b17639\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.197876 systemd[1]: run-netns-cni\x2d56fb7c33\x2dfc90\x2d1f09\x2d868e\x2d029d3afa4f9d.mount: Deactivated successfully. 
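Annotation: every RunPodSandbox failure above carries the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename before doing any add or delete work, and the file does not exist yet because calico/node is still being installed and has not mounted and populated /var/lib/calico/. A simplified sketch of that gate, under the assumption that the check is a plain stat as the error text indicates (this is an illustration, not the actual Calico CNI code):

// Reproduces the gate behind "stat /var/lib/calico/nodename: no such file or
// directory" seen in every failed RunPodSandbox above. Simplified illustration.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func calicoNodeReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := calicoNodeReady(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico/node has written its nodename; CNI adds can proceed")
}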
Oct 13 05:38:44.198308 containerd[1965]: time="2025-10-13T05:38:44.196753259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4hm2,Uid:d6919762-43d3-4224-82c3-07710fad2af5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e55d03ea7459c4838c57d99a9cc56f79ccde4d336aee7116c468c3af18b0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.199931 kubelet[3306]: E1013 05:38:44.199869 3306 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e55d03ea7459c4838c57d99a9cc56f79ccde4d336aee7116c468c3af18b0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.200271 kubelet[3306]: E1013 05:38:44.200239 3306 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e55d03ea7459c4838c57d99a9cc56f79ccde4d336aee7116c468c3af18b0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b4hm2" Oct 13 05:38:44.201768 kubelet[3306]: E1013 05:38:44.200719 3306 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e55d03ea7459c4838c57d99a9cc56f79ccde4d336aee7116c468c3af18b0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b4hm2" Oct 13 05:38:44.201768 kubelet[3306]: E1013 05:38:44.200794 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b4hm2_kube-system(d6919762-43d3-4224-82c3-07710fad2af5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b4hm2_kube-system(d6919762-43d3-4224-82c3-07710fad2af5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68e55d03ea7459c4838c57d99a9cc56f79ccde4d336aee7116c468c3af18b0cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b4hm2" podUID="d6919762-43d3-4224-82c3-07710fad2af5" Oct 13 05:38:44.216864 containerd[1965]: time="2025-10-13T05:38:44.216614933Z" level=error msg="Failed to destroy network for sandbox \"d27feea0505a972056a7d1301f0ac996f58fc50051c5d4d1013be8edddd43cb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.219820 containerd[1965]: time="2025-10-13T05:38:44.219105039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b559457d7-fhqld,Uid:61b998a9-69f8-4371-990d-0d5850be9043,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7a5aa347dcac59dd298ee342b4082d213b3a354dd8e7357ac22a79c235b17639\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.219552 systemd[1]: run-netns-cni\x2d7603b795\x2d1c69\x2d7827\x2d3547\x2d7e39f0c76d3d.mount: Deactivated successfully. Oct 13 05:38:44.220093 kubelet[3306]: E1013 05:38:44.219377 3306 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a5aa347dcac59dd298ee342b4082d213b3a354dd8e7357ac22a79c235b17639\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.222080 kubelet[3306]: E1013 05:38:44.220245 3306 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a5aa347dcac59dd298ee342b4082d213b3a354dd8e7357ac22a79c235b17639\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b559457d7-fhqld" Oct 13 05:38:44.222080 kubelet[3306]: E1013 05:38:44.220310 3306 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a5aa347dcac59dd298ee342b4082d213b3a354dd8e7357ac22a79c235b17639\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b559457d7-fhqld" Oct 13 05:38:44.222080 kubelet[3306]: E1013 05:38:44.220407 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b559457d7-fhqld_calico-system(61b998a9-69f8-4371-990d-0d5850be9043)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b559457d7-fhqld_calico-system(61b998a9-69f8-4371-990d-0d5850be9043)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a5aa347dcac59dd298ee342b4082d213b3a354dd8e7357ac22a79c235b17639\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b559457d7-fhqld" podUID="61b998a9-69f8-4371-990d-0d5850be9043" Oct 13 05:38:44.227467 containerd[1965]: time="2025-10-13T05:38:44.225264281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64486dfbd5-dvjbz,Uid:21eb228e-1a02-4f13-911b-b96a622340c7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d27feea0505a972056a7d1301f0ac996f58fc50051c5d4d1013be8edddd43cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.227616 kubelet[3306]: E1013 05:38:44.225897 3306 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d27feea0505a972056a7d1301f0ac996f58fc50051c5d4d1013be8edddd43cb2\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.227616 kubelet[3306]: E1013 05:38:44.225949 3306 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d27feea0505a972056a7d1301f0ac996f58fc50051c5d4d1013be8edddd43cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64486dfbd5-dvjbz" Oct 13 05:38:44.227616 kubelet[3306]: E1013 05:38:44.225978 3306 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d27feea0505a972056a7d1301f0ac996f58fc50051c5d4d1013be8edddd43cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64486dfbd5-dvjbz" Oct 13 05:38:44.228219 kubelet[3306]: E1013 05:38:44.226039 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64486dfbd5-dvjbz_calico-apiserver(21eb228e-1a02-4f13-911b-b96a622340c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64486dfbd5-dvjbz_calico-apiserver(21eb228e-1a02-4f13-911b-b96a622340c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d27feea0505a972056a7d1301f0ac996f58fc50051c5d4d1013be8edddd43cb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64486dfbd5-dvjbz" podUID="21eb228e-1a02-4f13-911b-b96a622340c7" Oct 13 05:38:44.228119 systemd[1]: run-netns-cni\x2dd743ee1f\x2dcdb7\x2d8a9a\x2d3fc4\x2d080e97574e28.mount: Deactivated successfully. Oct 13 05:38:44.232222 containerd[1965]: time="2025-10-13T05:38:44.232178226Z" level=error msg="Failed to destroy network for sandbox \"8f5529a4464b104fe6e30e38c284848c7965662144c2a3d41fdf63e9c9deea03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.235103 systemd[1]: run-netns-cni\x2d80f58b3c\x2ded86\x2df016\x2d431c\x2db357d903b663.mount: Deactivated successfully. 
Oct 13 05:38:44.238534 containerd[1965]: time="2025-10-13T05:38:44.235776966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bc8d9dfd-dsngt,Uid:05ff0b1d-84db-465d-be1a-932a1394d5de,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5529a4464b104fe6e30e38c284848c7965662144c2a3d41fdf63e9c9deea03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.238634 kubelet[3306]: E1013 05:38:44.236156 3306 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5529a4464b104fe6e30e38c284848c7965662144c2a3d41fdf63e9c9deea03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.238634 kubelet[3306]: E1013 05:38:44.236203 3306 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5529a4464b104fe6e30e38c284848c7965662144c2a3d41fdf63e9c9deea03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65bc8d9dfd-dsngt" Oct 13 05:38:44.238634 kubelet[3306]: E1013 05:38:44.236231 3306 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5529a4464b104fe6e30e38c284848c7965662144c2a3d41fdf63e9c9deea03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65bc8d9dfd-dsngt" Oct 13 05:38:44.238750 kubelet[3306]: E1013 05:38:44.236293 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65bc8d9dfd-dsngt_calico-system(05ff0b1d-84db-465d-be1a-932a1394d5de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65bc8d9dfd-dsngt_calico-system(05ff0b1d-84db-465d-be1a-932a1394d5de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f5529a4464b104fe6e30e38c284848c7965662144c2a3d41fdf63e9c9deea03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65bc8d9dfd-dsngt" podUID="05ff0b1d-84db-465d-be1a-932a1394d5de" Oct 13 05:38:44.793251 systemd[1]: Created slice kubepods-besteffort-pod96c04204_34c8_461e_8a7a_c15984adac1a.slice - libcontainer container kubepods-besteffort-pod96c04204_34c8_461e_8a7a_c15984adac1a.slice. 
Oct 13 05:38:44.795807 containerd[1965]: time="2025-10-13T05:38:44.795760826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx7bk,Uid:96c04204-34c8-461e-8a7a-c15984adac1a,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:44.857580 containerd[1965]: time="2025-10-13T05:38:44.857525123Z" level=error msg="Failed to destroy network for sandbox \"0d316448895e9cc27edb6274f755c5e508c3e1516eb5598ed578ee1c2e4ff6be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.864471 containerd[1965]: time="2025-10-13T05:38:44.859991937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx7bk,Uid:96c04204-34c8-461e-8a7a-c15984adac1a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d316448895e9cc27edb6274f755c5e508c3e1516eb5598ed578ee1c2e4ff6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.864775 kubelet[3306]: E1013 05:38:44.860272 3306 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d316448895e9cc27edb6274f755c5e508c3e1516eb5598ed578ee1c2e4ff6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:38:44.864775 kubelet[3306]: E1013 05:38:44.860348 3306 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d316448895e9cc27edb6274f755c5e508c3e1516eb5598ed578ee1c2e4ff6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx7bk" Oct 13 05:38:44.864775 kubelet[3306]: E1013 05:38:44.860369 3306 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d316448895e9cc27edb6274f755c5e508c3e1516eb5598ed578ee1c2e4ff6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx7bk" Oct 13 05:38:44.864948 kubelet[3306]: E1013 05:38:44.860448 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rx7bk_calico-system(96c04204-34c8-461e-8a7a-c15984adac1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rx7bk_calico-system(96c04204-34c8-461e-8a7a-c15984adac1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d316448895e9cc27edb6274f755c5e508c3e1516eb5598ed578ee1c2e4ff6be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rx7bk" podUID="96c04204-34c8-461e-8a7a-c15984adac1a" Oct 13 05:38:51.174037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount116763041.mount: Deactivated successfully. 
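
Every sandbox failure in the entries above reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file that calico/node writes once it has started and bind-mounted /var/lib/calico from the host. Below is a minimal Go sketch of that check, based only on the path named in the error text, not on the Calico source; it is an illustration, not the plugin's implementation.

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path the CNI errors above refer to; calico/node is
// expected to write this node's name here once it is up.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// This is the condition behind every "failed (add)" error above:
		// pod networking cannot be set up until calico-node has started.
		fmt.Println(nodenameFile, "is missing - check that the calico-node pod is running")
		return
	}
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println("Calico node name:", string(data))
}
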
Oct 13 05:38:51.215752 containerd[1965]: time="2025-10-13T05:38:51.215273743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:51.233298 containerd[1965]: time="2025-10-13T05:38:51.233242992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:38:51.234138 containerd[1965]: time="2025-10-13T05:38:51.234082259Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:51.244616 containerd[1965]: time="2025-10-13T05:38:51.244101879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:51.244616 containerd[1965]: time="2025-10-13T05:38:51.244497909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.946168556s" Oct 13 05:38:51.244616 containerd[1965]: time="2025-10-13T05:38:51.244531886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:38:51.294113 containerd[1965]: time="2025-10-13T05:38:51.294064798Z" level=info msg="CreateContainer within sandbox \"82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:38:51.338439 containerd[1965]: time="2025-10-13T05:38:51.334663443Z" level=info msg="Container d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:51.338262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943085755.mount: Deactivated successfully. Oct 13 05:38:51.366512 containerd[1965]: time="2025-10-13T05:38:51.366463879Z" level=info msg="CreateContainer within sandbox \"82e2503da1c1e4bbcee3e2d777550a59110de929207312d0b3d5f13bcaf731c1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b\"" Oct 13 05:38:51.367232 containerd[1965]: time="2025-10-13T05:38:51.367125131Z" level=info msg="StartContainer for \"d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b\"" Oct 13 05:38:51.374393 containerd[1965]: time="2025-10-13T05:38:51.374330321Z" level=info msg="connecting to shim d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b" address="unix:///run/containerd/s/ec3c98871606db2f3a5cb0656a7de87f8f83381dbd7638c3d88e0b4ab31ee364" protocol=ttrpc version=3 Oct 13 05:38:51.505751 systemd[1]: Started cri-containerd-d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b.scope - libcontainer container d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b. Oct 13 05:38:51.600977 containerd[1965]: time="2025-10-13T05:38:51.600923712Z" level=info msg="StartContainer for \"d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b\" returns successfully" Oct 13 05:38:51.728625 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Oct 13 05:38:51.728781 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 13 05:38:52.291920 kubelet[3306]: I1013 05:38:52.291865 3306 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7jfx\" (UniqueName: \"kubernetes.io/projected/05ff0b1d-84db-465d-be1a-932a1394d5de-kube-api-access-p7jfx\") pod \"05ff0b1d-84db-465d-be1a-932a1394d5de\" (UID: \"05ff0b1d-84db-465d-be1a-932a1394d5de\") " Oct 13 05:38:52.291920 kubelet[3306]: I1013 05:38:52.291925 3306 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05ff0b1d-84db-465d-be1a-932a1394d5de-whisker-backend-key-pair\") pod \"05ff0b1d-84db-465d-be1a-932a1394d5de\" (UID: \"05ff0b1d-84db-465d-be1a-932a1394d5de\") " Oct 13 05:38:52.292473 kubelet[3306]: I1013 05:38:52.291952 3306 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05ff0b1d-84db-465d-be1a-932a1394d5de-whisker-ca-bundle\") pod \"05ff0b1d-84db-465d-be1a-932a1394d5de\" (UID: \"05ff0b1d-84db-465d-be1a-932a1394d5de\") " Oct 13 05:38:52.302277 systemd[1]: var-lib-kubelet-pods-05ff0b1d\x2d84db\x2d465d\x2dbe1a\x2d932a1394d5de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7jfx.mount: Deactivated successfully. Oct 13 05:38:52.308433 kubelet[3306]: I1013 05:38:52.307579 3306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05ff0b1d-84db-465d-be1a-932a1394d5de-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "05ff0b1d-84db-465d-be1a-932a1394d5de" (UID: "05ff0b1d-84db-465d-be1a-932a1394d5de"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:38:52.308433 kubelet[3306]: I1013 05:38:52.307637 3306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ff0b1d-84db-465d-be1a-932a1394d5de-kube-api-access-p7jfx" (OuterVolumeSpecName: "kube-api-access-p7jfx") pod "05ff0b1d-84db-465d-be1a-932a1394d5de" (UID: "05ff0b1d-84db-465d-be1a-932a1394d5de"). InnerVolumeSpecName "kube-api-access-p7jfx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:38:52.309580 kubelet[3306]: I1013 05:38:52.309536 3306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff0b1d-84db-465d-be1a-932a1394d5de-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "05ff0b1d-84db-465d-be1a-932a1394d5de" (UID: "05ff0b1d-84db-465d-be1a-932a1394d5de"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:38:52.311490 systemd[1]: var-lib-kubelet-pods-05ff0b1d\x2d84db\x2d465d\x2dbe1a\x2d932a1394d5de-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:38:52.364290 systemd[1]: Removed slice kubepods-besteffort-pod05ff0b1d_84db_465d_be1a_932a1394d5de.slice - libcontainer container kubepods-besteffort-pod05ff0b1d_84db_465d_be1a_932a1394d5de.slice. 
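
The mount unit names in these entries (run-netns-cni\x2d..., var-lib-kubelet-pods-...\x7eprojected-...) are systemd-escaped paths: characters such as '-' and '~' are encoded as \x2d and \x7e. The helper below undoes only those \xNN escapes to make the unit names readable; it is a sketch, and systemd-escape itself has further rules (for example mapping '/' to '-') that are not handled here.

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// hexEscape matches systemd-style \xNN sequences in unit names.
var hexEscape = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

func unescapeUnit(s string) string {
	return hexEscape.ReplaceAllStringFunc(s, func(m string) string {
		n, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(n))
	})
}

func main() {
	// Unit name copied from the kube-api-access unmount entry above.
	fmt.Println(unescapeUnit(`var-lib-kubelet-pods-05ff0b1d\x2d84db\x2d465d\x2dbe1a\x2d932a1394d5de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7jfx.mount`))
	// Prints: var-lib-kubelet-pods-05ff0b1d-84db-465d-be1a-932a1394d5de-volumes-kubernetes.io~projected-kube-api-access-p7jfx.mount
}
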
Oct 13 05:38:52.385632 kubelet[3306]: I1013 05:38:52.383891 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wpggc" podStartSLOduration=2.240969767 podStartE2EDuration="22.383872719s" podCreationTimestamp="2025-10-13 05:38:30 +0000 UTC" firstStartedPulling="2025-10-13 05:38:31.106733811 +0000 UTC m=+51.466158576" lastFinishedPulling="2025-10-13 05:38:51.249636777 +0000 UTC m=+71.609061528" observedRunningTime="2025-10-13 05:38:52.382848738 +0000 UTC m=+72.742273502" watchObservedRunningTime="2025-10-13 05:38:52.383872719 +0000 UTC m=+72.743297491" Oct 13 05:38:52.392712 kubelet[3306]: I1013 05:38:52.392676 3306 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7jfx\" (UniqueName: \"kubernetes.io/projected/05ff0b1d-84db-465d-be1a-932a1394d5de-kube-api-access-p7jfx\") on node \"ip-172-31-26-130\" DevicePath \"\"" Oct 13 05:38:52.392712 kubelet[3306]: I1013 05:38:52.392707 3306 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05ff0b1d-84db-465d-be1a-932a1394d5de-whisker-backend-key-pair\") on node \"ip-172-31-26-130\" DevicePath \"\"" Oct 13 05:38:52.392712 kubelet[3306]: I1013 05:38:52.392720 3306 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05ff0b1d-84db-465d-be1a-932a1394d5de-whisker-ca-bundle\") on node \"ip-172-31-26-130\" DevicePath \"\"" Oct 13 05:38:52.576606 systemd[1]: Created slice kubepods-besteffort-pod04a7fd25_9e61_44bc_ba54_698f2becb85d.slice - libcontainer container kubepods-besteffort-pod04a7fd25_9e61_44bc_ba54_698f2becb85d.slice. Oct 13 05:38:52.694166 kubelet[3306]: I1013 05:38:52.694114 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04a7fd25-9e61-44bc-ba54-698f2becb85d-whisker-backend-key-pair\") pod \"whisker-7f6f7f6bf7-47lc6\" (UID: \"04a7fd25-9e61-44bc-ba54-698f2becb85d\") " pod="calico-system/whisker-7f6f7f6bf7-47lc6" Oct 13 05:38:52.694166 kubelet[3306]: I1013 05:38:52.694171 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04a7fd25-9e61-44bc-ba54-698f2becb85d-whisker-ca-bundle\") pod \"whisker-7f6f7f6bf7-47lc6\" (UID: \"04a7fd25-9e61-44bc-ba54-698f2becb85d\") " pod="calico-system/whisker-7f6f7f6bf7-47lc6" Oct 13 05:38:52.694378 kubelet[3306]: I1013 05:38:52.694208 3306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dct2\" (UniqueName: \"kubernetes.io/projected/04a7fd25-9e61-44bc-ba54-698f2becb85d-kube-api-access-7dct2\") pod \"whisker-7f6f7f6bf7-47lc6\" (UID: \"04a7fd25-9e61-44bc-ba54-698f2becb85d\") " pod="calico-system/whisker-7f6f7f6bf7-47lc6" Oct 13 05:38:52.880989 containerd[1965]: time="2025-10-13T05:38:52.880642359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f6f7f6bf7-47lc6,Uid:04a7fd25-9e61-44bc-ba54-698f2becb85d,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:53.486632 (udev-worker)[4565]: Network interface NamePolicy= disabled on kernel command line. 
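
The "Observed pod startup duration" entry at the start of this block reports two figures for calico-node-wpggc. They are self-consistent: subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets) from podStartE2EDuration reproduces podStartSLOduration exactly, so the SLO figure excludes time spent pulling the image. The few lines below only check that arithmetic; this is an observation about the printed values, not a claim about kubelet internals.

package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) and durations copied from the log entry above.
	firstStartedPulling := 51.466158576 // s
	lastFinishedPulling := 71.609061528 // s
	podStartE2EDuration := 22.383872719 // s

	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window: %.9f s\n", pullWindow)                     // 20.142902952 s
	fmt.Printf("E2E minus pull:    %.9f s\n", podStartE2EDuration-pullWindow) // 2.240969767 s == podStartSLOduration
}
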
Oct 13 05:38:53.494645 systemd-networkd[1573]: cali977101da529: Link UP Oct 13 05:38:53.495097 systemd-networkd[1573]: cali977101da529: Gained carrier Oct 13 05:38:53.553921 containerd[1965]: 2025-10-13 05:38:52.916 [INFO][4594] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:38:53.553921 containerd[1965]: 2025-10-13 05:38:52.982 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0 whisker-7f6f7f6bf7- calico-system 04a7fd25-9e61-44bc-ba54-698f2becb85d 969 0 2025-10-13 05:38:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f6f7f6bf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-130 whisker-7f6f7f6bf7-47lc6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali977101da529 [] [] }} ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Namespace="calico-system" Pod="whisker-7f6f7f6bf7-47lc6" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-" Oct 13 05:38:53.553921 containerd[1965]: 2025-10-13 05:38:52.982 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Namespace="calico-system" Pod="whisker-7f6f7f6bf7-47lc6" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" Oct 13 05:38:53.553921 containerd[1965]: 2025-10-13 05:38:53.371 [INFO][4605] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" HandleID="k8s-pod-network.daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Workload="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" Oct 13 05:38:53.558036 containerd[1965]: 2025-10-13 05:38:53.374 [INFO][4605] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" HandleID="k8s-pod-network.daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Workload="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"whisker-7f6f7f6bf7-47lc6", "timestamp":"2025-10-13 05:38:53.371052703 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:38:53.558036 containerd[1965]: 2025-10-13 05:38:53.374 [INFO][4605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:38:53.558036 containerd[1965]: 2025-10-13 05:38:53.375 [INFO][4605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:38:53.558036 containerd[1965]: 2025-10-13 05:38:53.375 [INFO][4605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Oct 13 05:38:53.558036 containerd[1965]: 2025-10-13 05:38:53.397 [INFO][4605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" host="ip-172-31-26-130" Oct 13 05:38:53.558036 containerd[1965]: 2025-10-13 05:38:53.409 [INFO][4605] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-130" Oct 13 05:38:53.558036 containerd[1965]: 2025-10-13 05:38:53.415 [INFO][4605] ipam/ipam.go 511: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:53.558036 containerd[1965]: 2025-10-13 05:38:53.417 [INFO][4605] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:53.558036 containerd[1965]: 2025-10-13 05:38:53.420 [INFO][4605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:53.561568 containerd[1965]: 2025-10-13 05:38:53.420 [INFO][4605] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" host="ip-172-31-26-130" Oct 13 05:38:53.561568 containerd[1965]: 2025-10-13 05:38:53.422 [INFO][4605] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69 Oct 13 05:38:53.561568 containerd[1965]: 2025-10-13 05:38:53.426 [INFO][4605] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" host="ip-172-31-26-130" Oct 13 05:38:53.561568 containerd[1965]: 2025-10-13 05:38:53.457 [INFO][4605] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.129/26] block=192.168.25.128/26 handle="k8s-pod-network.daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" host="ip-172-31-26-130" Oct 13 05:38:53.561568 containerd[1965]: 2025-10-13 05:38:53.457 [INFO][4605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.129/26] handle="k8s-pod-network.daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" host="ip-172-31-26-130" Oct 13 05:38:53.561568 containerd[1965]: 2025-10-13 05:38:53.457 [INFO][4605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
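
The ipam/ lines above show the allocation pattern used on this node: ip-172-31-26-130 holds an affinity for the block 192.168.25.128/26 and, while holding the host-wide IPAM lock, assigns pod addresses out of that block one at a time (.129 here, then .130 and .131 further down). The toy sketch below illustrates only the "first free address in the host's affine block" idea; the real allocator is Calico's ipam package and also manages handles, block claiming, and datastore writes.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in the block that has not been handed out.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.25.128/26")
	allocated := map[netip.Addr]bool{
		// Treat the block's network address as reserved; this is an
		// assumption of the sketch, not something taken from the log.
		netip.MustParseAddr("192.168.25.128"): true,
	}

	// Assign three addresses, mirroring the .129/.130/.131 sequence in this log.
	for i := 0; i < 3; i++ {
		addr, ok := nextFree(block, allocated)
		if !ok {
			fmt.Println("block exhausted")
			return
		}
		allocated[addr] = true
		fmt.Println("assigned", addr)
	}
}
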
Oct 13 05:38:53.561568 containerd[1965]: 2025-10-13 05:38:53.457 [INFO][4605] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.129/26] IPv6=[] ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" HandleID="k8s-pod-network.daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Workload="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" Oct 13 05:38:53.562469 containerd[1965]: 2025-10-13 05:38:53.464 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Namespace="calico-system" Pod="whisker-7f6f7f6bf7-47lc6" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0", GenerateName:"whisker-7f6f7f6bf7-", Namespace:"calico-system", SelfLink:"", UID:"04a7fd25-9e61-44bc-ba54-698f2becb85d", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f6f7f6bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"whisker-7f6f7f6bf7-47lc6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali977101da529", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:53.562469 containerd[1965]: 2025-10-13 05:38:53.464 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.129/32] ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Namespace="calico-system" Pod="whisker-7f6f7f6bf7-47lc6" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" Oct 13 05:38:53.562581 containerd[1965]: 2025-10-13 05:38:53.464 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali977101da529 ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Namespace="calico-system" Pod="whisker-7f6f7f6bf7-47lc6" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" Oct 13 05:38:53.562581 containerd[1965]: 2025-10-13 05:38:53.493 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Namespace="calico-system" Pod="whisker-7f6f7f6bf7-47lc6" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" Oct 13 05:38:53.562631 containerd[1965]: 2025-10-13 05:38:53.493 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Namespace="calico-system" Pod="whisker-7f6f7f6bf7-47lc6" 
WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0", GenerateName:"whisker-7f6f7f6bf7-", Namespace:"calico-system", SelfLink:"", UID:"04a7fd25-9e61-44bc-ba54-698f2becb85d", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f6f7f6bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69", Pod:"whisker-7f6f7f6bf7-47lc6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali977101da529", MAC:"0e:9c:4c:d5:61:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:53.562689 containerd[1965]: 2025-10-13 05:38:53.535 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" Namespace="calico-system" Pod="whisker-7f6f7f6bf7-47lc6" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--7f6f7f6bf7--47lc6-eth0" Oct 13 05:38:53.801978 kubelet[3306]: I1013 05:38:53.801363 3306 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ff0b1d-84db-465d-be1a-932a1394d5de" path="/var/lib/kubelet/pods/05ff0b1d-84db-465d-be1a-932a1394d5de/volumes" Oct 13 05:38:53.913834 containerd[1965]: time="2025-10-13T05:38:53.913754309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b\" id:\"824e944ae795abe170e5b60f3c10e9b0417f640384e024ebe7cbd36e958dd404\" pid:4735 exit_status:1 exited_at:{seconds:1760333933 nanos:911807963}" Oct 13 05:38:53.946989 containerd[1965]: time="2025-10-13T05:38:53.946929371Z" level=info msg="connecting to shim daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69" address="unix:///run/containerd/s/139cd33e47be5d5ba1025ff08d372a12b8fc2501d3398fae07cb65f2dda73084" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:54.001688 systemd[1]: Started cri-containerd-daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69.scope - libcontainer container daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69. 
Oct 13 05:38:54.126643 containerd[1965]: time="2025-10-13T05:38:54.126509858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f6f7f6bf7-47lc6,Uid:04a7fd25-9e61-44bc-ba54-698f2becb85d,Namespace:calico-system,Attempt:0,} returns sandbox id \"daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69\"" Oct 13 05:38:54.167100 containerd[1965]: time="2025-10-13T05:38:54.167043397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:38:54.310541 systemd-networkd[1573]: vxlan.calico: Link UP Oct 13 05:38:54.310550 systemd-networkd[1573]: vxlan.calico: Gained carrier Oct 13 05:38:54.335764 (udev-worker)[4563]: Network interface NamePolicy= disabled on kernel command line. Oct 13 05:38:54.488099 containerd[1965]: time="2025-10-13T05:38:54.487968378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b\" id:\"6f06676c4a7a0d1a4fee264fc59b3a6bc980075a157ee5d503bafa9a65f9e120\" pid:4860 exit_status:1 exited_at:{seconds:1760333934 nanos:487612286}" Oct 13 05:38:55.049645 systemd-networkd[1573]: cali977101da529: Gained IPv6LL Oct 13 05:38:55.644200 containerd[1965]: time="2025-10-13T05:38:55.644148813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:55.646020 containerd[1965]: time="2025-10-13T05:38:55.645984039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:38:55.648213 containerd[1965]: time="2025-10-13T05:38:55.648156658Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:55.651429 containerd[1965]: time="2025-10-13T05:38:55.651352551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:55.652390 containerd[1965]: time="2025-10-13T05:38:55.651927502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.48483776s" Oct 13 05:38:55.652390 containerd[1965]: time="2025-10-13T05:38:55.651957508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:38:55.676180 containerd[1965]: time="2025-10-13T05:38:55.676138624Z" level=info msg="CreateContainer within sandbox \"daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:38:55.690856 containerd[1965]: time="2025-10-13T05:38:55.688873882Z" level=info msg="Container 0e24b80f9a87e0df81e256a004c4583ac2d533b63519fdbed3c26d2ec3526af2: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:55.705426 containerd[1965]: time="2025-10-13T05:38:55.705378992Z" level=info msg="CreateContainer within sandbox \"daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"0e24b80f9a87e0df81e256a004c4583ac2d533b63519fdbed3c26d2ec3526af2\"" Oct 13 05:38:55.706280 containerd[1965]: time="2025-10-13T05:38:55.706026599Z" level=info msg="StartContainer for \"0e24b80f9a87e0df81e256a004c4583ac2d533b63519fdbed3c26d2ec3526af2\"" Oct 13 05:38:55.707760 containerd[1965]: time="2025-10-13T05:38:55.707719874Z" level=info msg="connecting to shim 0e24b80f9a87e0df81e256a004c4583ac2d533b63519fdbed3c26d2ec3526af2" address="unix:///run/containerd/s/139cd33e47be5d5ba1025ff08d372a12b8fc2501d3398fae07cb65f2dda73084" protocol=ttrpc version=3 Oct 13 05:38:55.732651 systemd[1]: Started cri-containerd-0e24b80f9a87e0df81e256a004c4583ac2d533b63519fdbed3c26d2ec3526af2.scope - libcontainer container 0e24b80f9a87e0df81e256a004c4583ac2d533b63519fdbed3c26d2ec3526af2. Oct 13 05:38:55.780182 containerd[1965]: time="2025-10-13T05:38:55.780124414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4hm2,Uid:d6919762-43d3-4224-82c3-07710fad2af5,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:55.783440 containerd[1965]: time="2025-10-13T05:38:55.783295179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64486dfbd5-dvjbz,Uid:21eb228e-1a02-4f13-911b-b96a622340c7,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:38:55.827087 containerd[1965]: time="2025-10-13T05:38:55.827030114Z" level=info msg="StartContainer for \"0e24b80f9a87e0df81e256a004c4583ac2d533b63519fdbed3c26d2ec3526af2\" returns successfully" Oct 13 05:38:55.834342 containerd[1965]: time="2025-10-13T05:38:55.834057995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:38:55.884601 systemd-networkd[1573]: vxlan.calico: Gained IPv6LL Oct 13 05:38:56.002952 (udev-worker)[4845]: Network interface NamePolicy= disabled on kernel command line. 
Oct 13 05:38:56.005206 systemd-networkd[1573]: cali34a66be1c95: Link UP Oct 13 05:38:56.005447 systemd-networkd[1573]: cali34a66be1c95: Gained carrier Oct 13 05:38:56.026259 containerd[1965]: 2025-10-13 05:38:55.900 [INFO][4937] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0 calico-apiserver-64486dfbd5- calico-apiserver 21eb228e-1a02-4f13-911b-b96a622340c7 901 0 2025-10-13 05:38:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64486dfbd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-130 calico-apiserver-64486dfbd5-dvjbz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali34a66be1c95 [] [] }} ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dvjbz" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-" Oct 13 05:38:56.026259 containerd[1965]: 2025-10-13 05:38:55.900 [INFO][4937] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dvjbz" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" Oct 13 05:38:56.026259 containerd[1965]: 2025-10-13 05:38:55.955 [INFO][4965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" HandleID="k8s-pod-network.98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Workload="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" Oct 13 05:38:56.026520 containerd[1965]: 2025-10-13 05:38:55.955 [INFO][4965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" HandleID="k8s-pod-network.98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Workload="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fa60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-130", "pod":"calico-apiserver-64486dfbd5-dvjbz", "timestamp":"2025-10-13 05:38:55.955650038 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:38:56.026520 containerd[1965]: 2025-10-13 05:38:55.956 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:38:56.026520 containerd[1965]: 2025-10-13 05:38:55.956 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:38:56.026520 containerd[1965]: 2025-10-13 05:38:55.956 [INFO][4965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Oct 13 05:38:56.026520 containerd[1965]: 2025-10-13 05:38:55.964 [INFO][4965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" host="ip-172-31-26-130" Oct 13 05:38:56.026520 containerd[1965]: 2025-10-13 05:38:55.969 [INFO][4965] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-130" Oct 13 05:38:56.026520 containerd[1965]: 2025-10-13 05:38:55.974 [INFO][4965] ipam/ipam.go 511: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:56.026520 containerd[1965]: 2025-10-13 05:38:55.977 [INFO][4965] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:56.026520 containerd[1965]: 2025-10-13 05:38:55.979 [INFO][4965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:56.027203 containerd[1965]: 2025-10-13 05:38:55.979 [INFO][4965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" host="ip-172-31-26-130" Oct 13 05:38:56.027203 containerd[1965]: 2025-10-13 05:38:55.981 [INFO][4965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c Oct 13 05:38:56.027203 containerd[1965]: 2025-10-13 05:38:55.988 [INFO][4965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" host="ip-172-31-26-130" Oct 13 05:38:56.027203 containerd[1965]: 2025-10-13 05:38:55.994 [INFO][4965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.130/26] block=192.168.25.128/26 handle="k8s-pod-network.98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" host="ip-172-31-26-130" Oct 13 05:38:56.027203 containerd[1965]: 2025-10-13 05:38:55.994 [INFO][4965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.130/26] handle="k8s-pod-network.98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" host="ip-172-31-26-130" Oct 13 05:38:56.027203 containerd[1965]: 2025-10-13 05:38:55.994 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:38:56.027203 containerd[1965]: 2025-10-13 05:38:55.994 [INFO][4965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.130/26] IPv6=[] ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" HandleID="k8s-pod-network.98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Workload="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" Oct 13 05:38:56.027578 containerd[1965]: 2025-10-13 05:38:55.999 [INFO][4937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dvjbz" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0", GenerateName:"calico-apiserver-64486dfbd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"21eb228e-1a02-4f13-911b-b96a622340c7", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64486dfbd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"calico-apiserver-64486dfbd5-dvjbz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34a66be1c95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:56.027684 containerd[1965]: 2025-10-13 05:38:56.000 [INFO][4937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.130/32] ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dvjbz" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" Oct 13 05:38:56.027684 containerd[1965]: 2025-10-13 05:38:56.000 [INFO][4937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34a66be1c95 ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dvjbz" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" Oct 13 05:38:56.027684 containerd[1965]: 2025-10-13 05:38:56.005 [INFO][4937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dvjbz" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" Oct 13 05:38:56.027798 containerd[1965]: 2025-10-13 05:38:56.005 [INFO][4937] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dvjbz" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0", GenerateName:"calico-apiserver-64486dfbd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"21eb228e-1a02-4f13-911b-b96a622340c7", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64486dfbd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c", Pod:"calico-apiserver-64486dfbd5-dvjbz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34a66be1c95", MAC:"0a:20:4f:d5:4b:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:56.027912 containerd[1965]: 2025-10-13 05:38:56.017 [INFO][4937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dvjbz" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dvjbz-eth0" Oct 13 05:38:56.077135 containerd[1965]: time="2025-10-13T05:38:56.076984093Z" level=info msg="connecting to shim 98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c" address="unix:///run/containerd/s/4674ca39defb0896ccba4e29adbca54bee268bc4c652b498ae91eb6b43c60b87" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:56.124949 systemd[1]: Started cri-containerd-98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c.scope - libcontainer container 98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c. 
Oct 13 05:38:56.156652 systemd-networkd[1573]: calie578b1257b5: Link UP Oct 13 05:38:56.159264 systemd-networkd[1573]: calie578b1257b5: Gained carrier Oct 13 05:38:56.184433 containerd[1965]: 2025-10-13 05:38:55.909 [INFO][4936] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0 coredns-674b8bbfcf- kube-system d6919762-43d3-4224-82c3-07710fad2af5 902 0 2025-10-13 05:37:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-130 coredns-674b8bbfcf-b4hm2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie578b1257b5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4hm2" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-" Oct 13 05:38:56.184433 containerd[1965]: 2025-10-13 05:38:55.909 [INFO][4936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4hm2" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" Oct 13 05:38:56.184433 containerd[1965]: 2025-10-13 05:38:55.959 [INFO][4970] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" HandleID="k8s-pod-network.8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" Oct 13 05:38:56.184790 containerd[1965]: 2025-10-13 05:38:55.959 [INFO][4970] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" HandleID="k8s-pod-network.8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-130", "pod":"coredns-674b8bbfcf-b4hm2", "timestamp":"2025-10-13 05:38:55.959450654 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:38:56.184790 containerd[1965]: 2025-10-13 05:38:55.959 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:38:56.184790 containerd[1965]: 2025-10-13 05:38:55.995 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:38:56.184790 containerd[1965]: 2025-10-13 05:38:55.995 [INFO][4970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Oct 13 05:38:56.184790 containerd[1965]: 2025-10-13 05:38:56.070 [INFO][4970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" host="ip-172-31-26-130" Oct 13 05:38:56.184790 containerd[1965]: 2025-10-13 05:38:56.088 [INFO][4970] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-130" Oct 13 05:38:56.184790 containerd[1965]: 2025-10-13 05:38:56.103 [INFO][4970] ipam/ipam.go 511: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:56.184790 containerd[1965]: 2025-10-13 05:38:56.109 [INFO][4970] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:56.184790 containerd[1965]: 2025-10-13 05:38:56.114 [INFO][4970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:56.185591 containerd[1965]: 2025-10-13 05:38:56.114 [INFO][4970] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" host="ip-172-31-26-130" Oct 13 05:38:56.185591 containerd[1965]: 2025-10-13 05:38:56.119 [INFO][4970] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5 Oct 13 05:38:56.185591 containerd[1965]: 2025-10-13 05:38:56.129 [INFO][4970] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" host="ip-172-31-26-130" Oct 13 05:38:56.185591 containerd[1965]: 2025-10-13 05:38:56.142 [INFO][4970] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.131/26] block=192.168.25.128/26 handle="k8s-pod-network.8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" host="ip-172-31-26-130" Oct 13 05:38:56.185591 containerd[1965]: 2025-10-13 05:38:56.143 [INFO][4970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.131/26] handle="k8s-pod-network.8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" host="ip-172-31-26-130" Oct 13 05:38:56.185591 containerd[1965]: 2025-10-13 05:38:56.143 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:38:56.185591 containerd[1965]: 2025-10-13 05:38:56.143 [INFO][4970] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.131/26] IPv6=[] ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" HandleID="k8s-pod-network.8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" Oct 13 05:38:56.186048 containerd[1965]: 2025-10-13 05:38:56.150 [INFO][4936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4hm2" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d6919762-43d3-4224-82c3-07710fad2af5", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"coredns-674b8bbfcf-b4hm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie578b1257b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:56.186048 containerd[1965]: 2025-10-13 05:38:56.150 [INFO][4936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.131/32] ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4hm2" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" Oct 13 05:38:56.186048 containerd[1965]: 2025-10-13 05:38:56.150 [INFO][4936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie578b1257b5 ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4hm2" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" Oct 13 05:38:56.186048 containerd[1965]: 2025-10-13 05:38:56.163 [INFO][4936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4hm2" 
WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" Oct 13 05:38:56.186048 containerd[1965]: 2025-10-13 05:38:56.164 [INFO][4936] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4hm2" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d6919762-43d3-4224-82c3-07710fad2af5", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5", Pod:"coredns-674b8bbfcf-b4hm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie578b1257b5", MAC:"a2:e9:5a:d1:b0:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:56.186048 containerd[1965]: 2025-10-13 05:38:56.180 [INFO][4936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4hm2" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--b4hm2-eth0" Oct 13 05:38:56.240752 containerd[1965]: time="2025-10-13T05:38:56.240550771Z" level=info msg="connecting to shim 8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5" address="unix:///run/containerd/s/7ee7003f8613c496f396612df39c4092abb222c5b956fcb3634df9d74eba6397" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:56.248037 containerd[1965]: time="2025-10-13T05:38:56.247954673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64486dfbd5-dvjbz,Uid:21eb228e-1a02-4f13-911b-b96a622340c7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c\"" Oct 13 05:38:56.284607 systemd[1]: Started cri-containerd-8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5.scope - libcontainer container 8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5. 
Oct 13 05:38:56.342337 containerd[1965]: time="2025-10-13T05:38:56.342297850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4hm2,Uid:d6919762-43d3-4224-82c3-07710fad2af5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5\"" Oct 13 05:38:56.349877 containerd[1965]: time="2025-10-13T05:38:56.349832964Z" level=info msg="CreateContainer within sandbox \"8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:38:56.402301 containerd[1965]: time="2025-10-13T05:38:56.402250559Z" level=info msg="Container ebfb360b54cecb83bd5e8db43bd6e9614522d892166b9ffe9e87b329b94fcf8e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:56.416960 containerd[1965]: time="2025-10-13T05:38:56.416903378Z" level=info msg="CreateContainer within sandbox \"8486f67b13758f34ce6a8d9977368078c97c1a247dbc6a302f1607c40e66a0c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ebfb360b54cecb83bd5e8db43bd6e9614522d892166b9ffe9e87b329b94fcf8e\"" Oct 13 05:38:56.418144 containerd[1965]: time="2025-10-13T05:38:56.418111774Z" level=info msg="StartContainer for \"ebfb360b54cecb83bd5e8db43bd6e9614522d892166b9ffe9e87b329b94fcf8e\"" Oct 13 05:38:56.419525 containerd[1965]: time="2025-10-13T05:38:56.419491694Z" level=info msg="connecting to shim ebfb360b54cecb83bd5e8db43bd6e9614522d892166b9ffe9e87b329b94fcf8e" address="unix:///run/containerd/s/7ee7003f8613c496f396612df39c4092abb222c5b956fcb3634df9d74eba6397" protocol=ttrpc version=3 Oct 13 05:38:56.437669 systemd[1]: Started cri-containerd-ebfb360b54cecb83bd5e8db43bd6e9614522d892166b9ffe9e87b329b94fcf8e.scope - libcontainer container ebfb360b54cecb83bd5e8db43bd6e9614522d892166b9ffe9e87b329b94fcf8e. 
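The entries above show containerd's CRI sequence for coredns-674b8bbfcf-b4hm2: RunPodSandbox returns sandbox id 8486f67b…, CreateContainer within that sandbox returns container id ebfb360b…, and StartContainer launches it over the sandbox's ttrpc shim socket. The following is a minimal sketch (not part of this log) that pulls the pod → sandbox → container mapping out of lines like these; the regexes, and the assumption that the inner quotes appear escaped as \" in this journal export, are keyed to the exact wording visible above and are not a stable containerd format.

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
)

// Sketch: map pod name -> sandbox id -> container id from containerd log
// text like the entries above. The patterns are assumptions tied to this
// log's wording (including the \" escaping inside msg="..."), not an API.
var (
    sandboxRe   = regexp.MustCompile(`RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*returns sandbox id \\"([0-9a-f]+)\\"`)
    containerRe = regexp.MustCompile(`CreateContainer within sandbox \\"([0-9a-f]+)\\".*returns container id \\"([0-9a-f]+)\\"`)
)

func main() {
    sandboxToPod := map[string]string{} // sandbox id -> pod name
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 1024*1024), 1024*1024) // these log lines are very long
    for sc.Scan() {
        line := sc.Text()
        if m := sandboxRe.FindStringSubmatch(line); m != nil {
            sandboxToPod[m[2]] = m[1]
        }
        if m := containerRe.FindStringSubmatch(line); m != nil {
            fmt.Printf("pod=%s sandbox=%.12s container=%.12s\n", sandboxToPod[m[1]], m[1], m[2])
        }
    }
}

Fed the entries above, this would pair coredns-674b8bbfcf-b4hm2 with sandbox 8486f67b1375… and container ebfb360b54ce….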
Oct 13 05:38:56.503740 containerd[1965]: time="2025-10-13T05:38:56.503687389Z" level=info msg="StartContainer for \"ebfb360b54cecb83bd5e8db43bd6e9614522d892166b9ffe9e87b329b94fcf8e\" returns successfully" Oct 13 05:38:56.779650 containerd[1965]: time="2025-10-13T05:38:56.779595999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64486dfbd5-dnft8,Uid:5ca3db14-0698-4639-b240-ba20e318c953,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:38:56.780299 containerd[1965]: time="2025-10-13T05:38:56.780127750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b559457d7-fhqld,Uid:61b998a9-69f8-4371-990d-0d5850be9043,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:56.780299 containerd[1965]: time="2025-10-13T05:38:56.780240475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jcxzl,Uid:405beab3-108b-401c-b301-f27eeb7eec5e,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:57.048552 systemd-networkd[1573]: cali7c330adfd78: Link UP Oct 13 05:38:57.050645 systemd-networkd[1573]: cali7c330adfd78: Gained carrier Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:56.923 [INFO][5127] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0 goldmane-54d579b49d- calico-system 405beab3-108b-401c-b301-f27eeb7eec5e 898 0 2025-10-13 05:38:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-130 goldmane-54d579b49d-jcxzl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7c330adfd78 [] [] }} ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Namespace="calico-system" Pod="goldmane-54d579b49d-jcxzl" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:56.924 [INFO][5127] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Namespace="calico-system" Pod="goldmane-54d579b49d-jcxzl" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:56.974 [INFO][5159] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" HandleID="k8s-pod-network.4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Workload="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:56.975 [INFO][5159] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" HandleID="k8s-pod-network.4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Workload="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"goldmane-54d579b49d-jcxzl", "timestamp":"2025-10-13 05:38:56.974891045 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:56.975 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:56.975 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:56.975 [INFO][5159] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:56.988 [INFO][5159] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" host="ip-172-31-26-130" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:56.998 [INFO][5159] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-130" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.013 [INFO][5159] ipam/ipam.go 511: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.016 [INFO][5159] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.019 [INFO][5159] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.019 [INFO][5159] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" host="ip-172-31-26-130" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.022 [INFO][5159] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.030 [INFO][5159] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" host="ip-172-31-26-130" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.037 [INFO][5159] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.132/26] block=192.168.25.128/26 handle="k8s-pod-network.4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" host="ip-172-31-26-130" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.037 [INFO][5159] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.132/26] handle="k8s-pod-network.4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" host="ip-172-31-26-130" Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.038 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:38:57.071976 containerd[1965]: 2025-10-13 05:38:57.038 [INFO][5159] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.132/26] IPv6=[] ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" HandleID="k8s-pod-network.4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Workload="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" Oct 13 05:38:57.073900 containerd[1965]: 2025-10-13 05:38:57.042 [INFO][5127] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Namespace="calico-system" Pod="goldmane-54d579b49d-jcxzl" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"405beab3-108b-401c-b301-f27eeb7eec5e", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"goldmane-54d579b49d-jcxzl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c330adfd78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:57.073900 containerd[1965]: 2025-10-13 05:38:57.042 [INFO][5127] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.132/32] ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Namespace="calico-system" Pod="goldmane-54d579b49d-jcxzl" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" Oct 13 05:38:57.073900 containerd[1965]: 2025-10-13 05:38:57.042 [INFO][5127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c330adfd78 ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Namespace="calico-system" Pod="goldmane-54d579b49d-jcxzl" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" Oct 13 05:38:57.073900 containerd[1965]: 2025-10-13 05:38:57.051 [INFO][5127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Namespace="calico-system" Pod="goldmane-54d579b49d-jcxzl" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" Oct 13 05:38:57.073900 containerd[1965]: 2025-10-13 05:38:57.052 [INFO][5127] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Namespace="calico-system" Pod="goldmane-54d579b49d-jcxzl" 
WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"405beab3-108b-401c-b301-f27eeb7eec5e", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db", Pod:"goldmane-54d579b49d-jcxzl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c330adfd78", MAC:"12:ff:b2:9a:76:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:57.073900 containerd[1965]: 2025-10-13 05:38:57.068 [INFO][5127] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" Namespace="calico-system" Pod="goldmane-54d579b49d-jcxzl" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--54d579b49d--jcxzl-eth0" Oct 13 05:38:57.133621 containerd[1965]: time="2025-10-13T05:38:57.133577320Z" level=info msg="connecting to shim 4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db" address="unix:///run/containerd/s/dc1f112bea60ebf8d82525e1d7586513370fb41faa1cedb664d60db72b6d716e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:57.157352 systemd-networkd[1573]: cali29a628e980e: Link UP Oct 13 05:38:57.159322 systemd-networkd[1573]: cali29a628e980e: Gained carrier Oct 13 05:38:57.181942 systemd[1]: Started cri-containerd-4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db.scope - libcontainer container 4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db. 
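The [INFO][5159] ipam entries above trace Calico's assignment for goldmane-54d579b49d-jcxzl: acquire the host-wide IPAM lock, look up this host's block affinity, confirm the affinity for 192.168.25.128/26, claim 192.168.25.132 from that block, write the block back, and release the lock. Below is a much-simplified sketch of that lock-then-claim-from-affine-block ordering only; hostBlock, claimNext and the pre-seeded .128–.130 entries are illustrative assumptions (earlier assignments are not shown in this excerpt), not Calico's implementation.

package main

import (
    "fmt"
    "net"
    "sync"
)

// Simplified model of the sequence the ipam/ipam.go entries trace:
// take the host-wide lock, scan the host-affine block for a free address,
// mark it claimed, release the lock. Illustration only.
type hostBlock struct {
    mu       sync.Mutex      // stands in for the "host-wide IPAM lock"
    cidr     *net.IPNet      // e.g. 192.168.25.128/26, the host-affine block
    assigned map[string]bool // addresses already handed out
}

func (b *hostBlock) claimNext() (net.IP, error) {
    b.mu.Lock()         // "About to acquire host-wide IPAM lock."
    defer b.mu.Unlock() // "Released host-wide IPAM lock."

    base := b.cidr.IP.Mask(b.cidr.Mask)
    ones, bits := b.cidr.Mask.Size()
    for i := 0; i < 1<<(bits-ones); i++ {
        cand := make(net.IP, len(base))
        copy(cand, base)
        cand[len(cand)-1] += byte(i) // fine for a /26 that stays in one octet
        if !b.assigned[cand.String()] {
            b.assigned[cand.String()] = true // "Successfully claimed IPs"
            return cand, nil
        }
    }
    return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
    _, cidr, _ := net.ParseCIDR("192.168.25.128/26")
    b := &hostBlock{cidr: cidr, assigned: map[string]bool{
        // .131 is shown as claimed above; .128-.130 are assumed used here
        // only so the example yields .132, matching the goldmane entry.
        "192.168.25.128": true, "192.168.25.129": true,
        "192.168.25.130": true, "192.168.25.131": true,
    }}
    ip, _ := b.claimNext()
    fmt.Println("claimed", ip) // 192.168.25.132
}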
Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:56.912 [INFO][5114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0 calico-apiserver-64486dfbd5- calico-apiserver 5ca3db14-0698-4639-b240-ba20e318c953 904 0 2025-10-13 05:38:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64486dfbd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-130 calico-apiserver-64486dfbd5-dnft8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29a628e980e [] [] }} ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dnft8" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:56.913 [INFO][5114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dnft8" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.011 [INFO][5152] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" HandleID="k8s-pod-network.cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Workload="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.012 [INFO][5152] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" HandleID="k8s-pod-network.cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Workload="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-130", "pod":"calico-apiserver-64486dfbd5-dnft8", "timestamp":"2025-10-13 05:38:57.011602829 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.012 [INFO][5152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.037 [INFO][5152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.037 [INFO][5152] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.088 [INFO][5152] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" host="ip-172-31-26-130" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.097 [INFO][5152] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-130" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.104 [INFO][5152] ipam/ipam.go 511: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.107 [INFO][5152] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.116 [INFO][5152] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.117 [INFO][5152] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" host="ip-172-31-26-130" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.121 [INFO][5152] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692 Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.130 [INFO][5152] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" host="ip-172-31-26-130" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.141 [INFO][5152] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.133/26] block=192.168.25.128/26 handle="k8s-pod-network.cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" host="ip-172-31-26-130" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.141 [INFO][5152] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.133/26] handle="k8s-pod-network.cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" host="ip-172-31-26-130" Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.141 [INFO][5152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:38:57.197667 containerd[1965]: 2025-10-13 05:38:57.141 [INFO][5152] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.133/26] IPv6=[] ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" HandleID="k8s-pod-network.cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Workload="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" Oct 13 05:38:57.200756 containerd[1965]: 2025-10-13 05:38:57.146 [INFO][5114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dnft8" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0", GenerateName:"calico-apiserver-64486dfbd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ca3db14-0698-4639-b240-ba20e318c953", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64486dfbd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"calico-apiserver-64486dfbd5-dnft8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29a628e980e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:57.200756 containerd[1965]: 2025-10-13 05:38:57.147 [INFO][5114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.133/32] ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dnft8" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" Oct 13 05:38:57.200756 containerd[1965]: 2025-10-13 05:38:57.149 [INFO][5114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29a628e980e ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dnft8" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" Oct 13 05:38:57.200756 containerd[1965]: 2025-10-13 05:38:57.163 [INFO][5114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dnft8" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" Oct 13 05:38:57.200756 containerd[1965]: 2025-10-13 05:38:57.166 [INFO][5114] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dnft8" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0", GenerateName:"calico-apiserver-64486dfbd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ca3db14-0698-4639-b240-ba20e318c953", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64486dfbd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692", Pod:"calico-apiserver-64486dfbd5-dnft8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29a628e980e", MAC:"ce:2d:41:37:3f:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:57.200756 containerd[1965]: 2025-10-13 05:38:57.189 [INFO][5114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" Namespace="calico-apiserver" Pod="calico-apiserver-64486dfbd5-dnft8" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--64486dfbd5--dnft8-eth0" Oct 13 05:38:57.248194 systemd-networkd[1573]: cali47be432459c: Link UP Oct 13 05:38:57.248406 systemd-networkd[1573]: cali47be432459c: Gained carrier Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:56.921 [INFO][5122] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0 calico-kube-controllers-5b559457d7- calico-system 61b998a9-69f8-4371-990d-0d5850be9043 903 0 2025-10-13 05:38:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b559457d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-130 calico-kube-controllers-5b559457d7-fhqld eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali47be432459c [] [] }} ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Namespace="calico-system" Pod="calico-kube-controllers-5b559457d7-fhqld" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:56.921 [INFO][5122] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Namespace="calico-system" Pod="calico-kube-controllers-5b559457d7-fhqld" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.023 [INFO][5157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" HandleID="k8s-pod-network.0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.023 [INFO][5157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" HandleID="k8s-pod-network.0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002db530), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"calico-kube-controllers-5b559457d7-fhqld", "timestamp":"2025-10-13 05:38:57.023290385 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.023 [INFO][5157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.141 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.141 [INFO][5157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.190 [INFO][5157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" host="ip-172-31-26-130" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.198 [INFO][5157] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-130" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.206 [INFO][5157] ipam/ipam.go 511: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.210 [INFO][5157] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.214 [INFO][5157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.214 [INFO][5157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" host="ip-172-31-26-130" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.217 [INFO][5157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30 Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.228 [INFO][5157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" host="ip-172-31-26-130" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.239 [INFO][5157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.134/26] block=192.168.25.128/26 handle="k8s-pod-network.0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" host="ip-172-31-26-130" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.239 [INFO][5157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.134/26] handle="k8s-pod-network.0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" host="ip-172-31-26-130" Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.239 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:38:57.273579 containerd[1965]: 2025-10-13 05:38:57.239 [INFO][5157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.134/26] IPv6=[] ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" HandleID="k8s-pod-network.0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" Oct 13 05:38:57.275272 containerd[1965]: 2025-10-13 05:38:57.243 [INFO][5122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Namespace="calico-system" Pod="calico-kube-controllers-5b559457d7-fhqld" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0", GenerateName:"calico-kube-controllers-5b559457d7-", Namespace:"calico-system", SelfLink:"", UID:"61b998a9-69f8-4371-990d-0d5850be9043", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b559457d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"calico-kube-controllers-5b559457d7-fhqld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47be432459c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:57.275272 containerd[1965]: 2025-10-13 05:38:57.243 [INFO][5122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.134/32] ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Namespace="calico-system" Pod="calico-kube-controllers-5b559457d7-fhqld" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" Oct 13 05:38:57.275272 containerd[1965]: 2025-10-13 05:38:57.244 [INFO][5122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47be432459c ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Namespace="calico-system" Pod="calico-kube-controllers-5b559457d7-fhqld" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" Oct 13 05:38:57.275272 containerd[1965]: 2025-10-13 05:38:57.249 [INFO][5122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Namespace="calico-system" Pod="calico-kube-controllers-5b559457d7-fhqld" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" Oct 13 05:38:57.275272 containerd[1965]: 
2025-10-13 05:38:57.249 [INFO][5122] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Namespace="calico-system" Pod="calico-kube-controllers-5b559457d7-fhqld" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0", GenerateName:"calico-kube-controllers-5b559457d7-", Namespace:"calico-system", SelfLink:"", UID:"61b998a9-69f8-4371-990d-0d5850be9043", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b559457d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30", Pod:"calico-kube-controllers-5b559457d7-fhqld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47be432459c", MAC:"ce:1e:78:fa:4b:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:57.275272 containerd[1965]: 2025-10-13 05:38:57.263 [INFO][5122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" Namespace="calico-system" Pod="calico-kube-controllers-5b559457d7-fhqld" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--5b559457d7--fhqld-eth0" Oct 13 05:38:57.308717 containerd[1965]: time="2025-10-13T05:38:57.308671059Z" level=info msg="connecting to shim cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692" address="unix:///run/containerd/s/7e880bdf349fce77fda84a683d47d72edeb3490399c5985cb52a2e70bc0bfdd0" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:57.364659 containerd[1965]: time="2025-10-13T05:38:57.364615342Z" level=info msg="connecting to shim 0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30" address="unix:///run/containerd/s/37a29acd17aa504f6c5411ebc5e590e28fc66eaf88aa422328914ed027d0d8bc" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:57.365356 systemd[1]: Started cri-containerd-cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692.scope - libcontainer container cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692. Oct 13 05:38:57.441925 systemd[1]: Started cri-containerd-0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30.scope - libcontainer container 0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30. 
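Each sandbox above gets its own shim socket under /run/containerd/s/ and a transient cri-containerd-<id>.scope unit, and a container dials the same socket as its sandbox: the coredns sandbox 8486f67b… and its container ebfb360b… both connect to /run/containerd/s/7ee7003f…. A small sketch (not part of this log) that groups IDs by the shim socket they dial; the regex is an assumption keyed to the `connecting to shim` wording above.

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
)

// Sketch: group container/sandbox IDs by shim socket, from the
// `connecting to shim <id>" address="unix://...` entries above.
var shimRe = regexp.MustCompile(`connecting to shim ([0-9a-f]{64})" address="(unix://[^"]+)"`)

func main() {
    bySocket := map[string][]string{}
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 1024*1024), 1024*1024)
    for sc.Scan() {
        if m := shimRe.FindStringSubmatch(sc.Text()); m != nil {
            bySocket[m[2]] = append(bySocket[m[2]], m[1][:12])
        }
    }
    for sock, ids := range bySocket {
        fmt.Printf("%s -> %v\n", sock, ids)
    }
}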
Oct 13 05:38:57.519449 containerd[1965]: time="2025-10-13T05:38:57.519371718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jcxzl,Uid:405beab3-108b-401c-b301-f27eeb7eec5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db\"" Oct 13 05:38:57.609692 systemd-networkd[1573]: calie578b1257b5: Gained IPv6LL Oct 13 05:38:57.727765 containerd[1965]: time="2025-10-13T05:38:57.727210203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64486dfbd5-dnft8,Uid:5ca3db14-0698-4639-b240-ba20e318c953,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692\"" Oct 13 05:38:57.738486 systemd-networkd[1573]: cali34a66be1c95: Gained IPv6LL Oct 13 05:38:57.752656 containerd[1965]: time="2025-10-13T05:38:57.751972913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b559457d7-fhqld,Uid:61b998a9-69f8-4371-990d-0d5850be9043,Namespace:calico-system,Attempt:0,} returns sandbox id \"0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30\"" Oct 13 05:38:57.780298 containerd[1965]: time="2025-10-13T05:38:57.780212187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx7bk,Uid:96c04204-34c8-461e-8a7a-c15984adac1a,Namespace:calico-system,Attempt:0,}" Oct 13 05:38:58.123872 systemd-networkd[1573]: calif3b33c40317: Link UP Oct 13 05:38:58.124149 systemd-networkd[1573]: calif3b33c40317: Gained carrier Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:57.927 [INFO][5345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0 csi-node-driver- calico-system 96c04204-34c8-461e-8a7a-c15984adac1a 778 0 2025-10-13 05:38:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-130 csi-node-driver-rx7bk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif3b33c40317 [] [] }} ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Namespace="calico-system" Pod="csi-node-driver-rx7bk" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:57.928 [INFO][5345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Namespace="calico-system" Pod="csi-node-driver-rx7bk" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.045 [INFO][5360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" HandleID="k8s-pod-network.a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Workload="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.046 [INFO][5360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" 
HandleID="k8s-pod-network.a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Workload="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333da0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"csi-node-driver-rx7bk", "timestamp":"2025-10-13 05:38:58.044935388 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.047 [INFO][5360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.048 [INFO][5360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.049 [INFO][5360] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.061 [INFO][5360] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" host="ip-172-31-26-130" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.068 [INFO][5360] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-130" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.081 [INFO][5360] ipam/ipam.go 511: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.084 [INFO][5360] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.089 [INFO][5360] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.089 [INFO][5360] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" host="ip-172-31-26-130" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.093 [INFO][5360] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796 Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.100 [INFO][5360] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" host="ip-172-31-26-130" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.113 [INFO][5360] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.135/26] block=192.168.25.128/26 handle="k8s-pod-network.a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" host="ip-172-31-26-130" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.113 [INFO][5360] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.135/26] handle="k8s-pod-network.a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" host="ip-172-31-26-130" Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.113 [INFO][5360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:38:58.157603 containerd[1965]: 2025-10-13 05:38:58.113 [INFO][5360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.135/26] IPv6=[] ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" HandleID="k8s-pod-network.a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Workload="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" Oct 13 05:38:58.161448 containerd[1965]: 2025-10-13 05:38:58.116 [INFO][5345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Namespace="calico-system" Pod="csi-node-driver-rx7bk" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96c04204-34c8-461e-8a7a-c15984adac1a", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"csi-node-driver-rx7bk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3b33c40317", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:58.161448 containerd[1965]: 2025-10-13 05:38:58.116 [INFO][5345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.135/32] ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Namespace="calico-system" Pod="csi-node-driver-rx7bk" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" Oct 13 05:38:58.161448 containerd[1965]: 2025-10-13 05:38:58.117 [INFO][5345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3b33c40317 ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Namespace="calico-system" Pod="csi-node-driver-rx7bk" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" Oct 13 05:38:58.161448 containerd[1965]: 2025-10-13 05:38:58.121 [INFO][5345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Namespace="calico-system" Pod="csi-node-driver-rx7bk" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" Oct 13 05:38:58.161448 containerd[1965]: 2025-10-13 05:38:58.122 [INFO][5345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" 
Namespace="calico-system" Pod="csi-node-driver-rx7bk" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96c04204-34c8-461e-8a7a-c15984adac1a", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796", Pod:"csi-node-driver-rx7bk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3b33c40317", MAC:"4e:0e:f9:0f:8e:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:58.161448 containerd[1965]: 2025-10-13 05:38:58.144 [INFO][5345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" Namespace="calico-system" Pod="csi-node-driver-rx7bk" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--rx7bk-eth0" Oct 13 05:38:58.185949 systemd-networkd[1573]: cali29a628e980e: Gained IPv6LL Oct 13 05:38:58.270195 kubelet[3306]: I1013 05:38:58.262993 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b4hm2" podStartSLOduration=73.182374228 podStartE2EDuration="1m13.182374228s" podCreationTimestamp="2025-10-13 05:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:38:57.545970986 +0000 UTC m=+77.905395759" watchObservedRunningTime="2025-10-13 05:38:58.182374228 +0000 UTC m=+78.541799005" Oct 13 05:38:58.308366 containerd[1965]: time="2025-10-13T05:38:58.308310865Z" level=info msg="connecting to shim a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796" address="unix:///run/containerd/s/287885458e21bbdac183ad7f9a52e75609293eacbc2693ade6934fab54862d50" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:58.388925 systemd[1]: Started cri-containerd-a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796.scope - libcontainer container a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796. 
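systemd-networkd reports each Calico host-side veth coming up in stages in the entries above: Link UP, Gained carrier, and later Gained IPv6LL once the link-local address is ready (calie578b1257b5, cali7c330adfd78, cali29a628e980e, cali47be432459c, calif3b33c40317, and below calidc7e812006f). A minimal sketch that folds those messages into a latest-state table per interface; the matched wording is taken from this log and treated as an assumption, not a stable format.

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
    "sort"
)

// Sketch: track the most recent systemd-networkd event per cali* veth.
var nwRe = regexp.MustCompile(`systemd-networkd\[\d+\]: (cali[0-9a-f]+): (Link UP|Gained carrier|Gained IPv6LL)`)

func main() {
    state := map[string]string{}
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 1024*1024), 1024*1024)
    for sc.Scan() {
        if m := nwRe.FindStringSubmatch(sc.Text()); m != nil {
            state[m[1]] = m[2] // keep the latest event per interface
        }
    }
    names := make([]string, 0, len(state))
    for n := range state {
        names = append(names, n)
    }
    sort.Strings(names)
    for _, n := range names {
        fmt.Printf("%-16s %s\n", n, state[n])
    }
}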
Oct 13 05:38:58.490135 containerd[1965]: time="2025-10-13T05:38:58.490082791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx7bk,Uid:96c04204-34c8-461e-8a7a-c15984adac1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796\"" Oct 13 05:38:58.505739 systemd-networkd[1573]: cali47be432459c: Gained IPv6LL Oct 13 05:38:58.779687 containerd[1965]: time="2025-10-13T05:38:58.779642326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8gw5k,Uid:0f8dfa93-d61a-4e61-90cb-78d638d4cb8b,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:58.953703 systemd-networkd[1573]: cali7c330adfd78: Gained IPv6LL Oct 13 05:38:59.059343 systemd-networkd[1573]: calidc7e812006f: Link UP Oct 13 05:38:59.061134 systemd-networkd[1573]: calidc7e812006f: Gained carrier Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:58.903 [INFO][5421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0 coredns-674b8bbfcf- kube-system 0f8dfa93-d61a-4e61-90cb-78d638d4cb8b 900 0 2025-10-13 05:37:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-130 coredns-674b8bbfcf-8gw5k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidc7e812006f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Namespace="kube-system" Pod="coredns-674b8bbfcf-8gw5k" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:58.903 [INFO][5421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Namespace="kube-system" Pod="coredns-674b8bbfcf-8gw5k" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:58.991 [INFO][5434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" HandleID="k8s-pod-network.5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:58.991 [INFO][5434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" HandleID="k8s-pod-network.5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f620), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-130", "pod":"coredns-674b8bbfcf-8gw5k", "timestamp":"2025-10-13 05:38:58.990990813 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:58.991 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:58.991 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:58.991 [INFO][5434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.001 [INFO][5434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" host="ip-172-31-26-130" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.007 [INFO][5434] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-130" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.014 [INFO][5434] ipam/ipam.go 511: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.017 [INFO][5434] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.022 [INFO][5434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.022 [INFO][5434] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" host="ip-172-31-26-130" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.027 [INFO][5434] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.035 [INFO][5434] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" host="ip-172-31-26-130" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.046 [INFO][5434] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.136/26] block=192.168.25.128/26 handle="k8s-pod-network.5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" host="ip-172-31-26-130" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.047 [INFO][5434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.136/26] handle="k8s-pod-network.5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" host="ip-172-31-26-130" Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.047 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:38:59.107054 containerd[1965]: 2025-10-13 05:38:59.048 [INFO][5434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.136/26] IPv6=[] ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" HandleID="k8s-pod-network.5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" Oct 13 05:38:59.113641 containerd[1965]: 2025-10-13 05:38:59.053 [INFO][5421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Namespace="kube-system" Pod="coredns-674b8bbfcf-8gw5k" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0f8dfa93-d61a-4e61-90cb-78d638d4cb8b", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"coredns-674b8bbfcf-8gw5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc7e812006f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:59.113641 containerd[1965]: 2025-10-13 05:38:59.053 [INFO][5421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.136/32] ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Namespace="kube-system" Pod="coredns-674b8bbfcf-8gw5k" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" Oct 13 05:38:59.113641 containerd[1965]: 2025-10-13 05:38:59.053 [INFO][5421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc7e812006f ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Namespace="kube-system" Pod="coredns-674b8bbfcf-8gw5k" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" Oct 13 05:38:59.113641 containerd[1965]: 2025-10-13 05:38:59.062 [INFO][5421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Namespace="kube-system" Pod="coredns-674b8bbfcf-8gw5k" 
WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" Oct 13 05:38:59.113641 containerd[1965]: 2025-10-13 05:38:59.068 [INFO][5421] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Namespace="kube-system" Pod="coredns-674b8bbfcf-8gw5k" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0f8dfa93-d61a-4e61-90cb-78d638d4cb8b", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db", Pod:"coredns-674b8bbfcf-8gw5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc7e812006f", MAC:"ce:08:a9:19:08:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:38:59.113641 containerd[1965]: 2025-10-13 05:38:59.100 [INFO][5421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" Namespace="kube-system" Pod="coredns-674b8bbfcf-8gw5k" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--8gw5k-eth0" Oct 13 05:38:59.238376 containerd[1965]: time="2025-10-13T05:38:59.238307106Z" level=info msg="connecting to shim 5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db" address="unix:///run/containerd/s/3350bc0102117f2112777d0c585e7cf995e34e9bc1ea9b4cfe66a68ac6d64a67" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:59.276701 systemd[1]: Started cri-containerd-5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db.scope - libcontainer container 5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db. 
Oct 13 05:38:59.446201 containerd[1965]: time="2025-10-13T05:38:59.445510010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8gw5k,Uid:0f8dfa93-d61a-4e61-90cb-78d638d4cb8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db\"" Oct 13 05:38:59.466392 containerd[1965]: time="2025-10-13T05:38:59.465334910Z" level=info msg="CreateContainer within sandbox \"5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:38:59.487172 containerd[1965]: time="2025-10-13T05:38:59.486071086Z" level=info msg="Container 4340c4bbe01b160ecce1aa18158a2093a82fdbc4daa19833d83188309146b472: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:59.509014 containerd[1965]: time="2025-10-13T05:38:59.508844940Z" level=info msg="CreateContainer within sandbox \"5d2f92b44445d45858c7aedaee8d9c4c135e96fbacd32bcfc2da34bc210ea6db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4340c4bbe01b160ecce1aa18158a2093a82fdbc4daa19833d83188309146b472\"" Oct 13 05:38:59.510670 containerd[1965]: time="2025-10-13T05:38:59.510585436Z" level=info msg="StartContainer for \"4340c4bbe01b160ecce1aa18158a2093a82fdbc4daa19833d83188309146b472\"" Oct 13 05:38:59.511964 containerd[1965]: time="2025-10-13T05:38:59.511889222Z" level=info msg="connecting to shim 4340c4bbe01b160ecce1aa18158a2093a82fdbc4daa19833d83188309146b472" address="unix:///run/containerd/s/3350bc0102117f2112777d0c585e7cf995e34e9bc1ea9b4cfe66a68ac6d64a67" protocol=ttrpc version=3 Oct 13 05:38:59.566714 systemd[1]: Started cri-containerd-4340c4bbe01b160ecce1aa18158a2093a82fdbc4daa19833d83188309146b472.scope - libcontainer container 4340c4bbe01b160ecce1aa18158a2093a82fdbc4daa19833d83188309146b472. 
Oct 13 05:38:59.629997 containerd[1965]: time="2025-10-13T05:38:59.629845933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:59.634266 containerd[1965]: time="2025-10-13T05:38:59.634234481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:38:59.635615 containerd[1965]: time="2025-10-13T05:38:59.635565784Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:59.639818 containerd[1965]: time="2025-10-13T05:38:59.639778907Z" level=info msg="StartContainer for \"4340c4bbe01b160ecce1aa18158a2093a82fdbc4daa19833d83188309146b472\" returns successfully" Oct 13 05:38:59.641395 containerd[1965]: time="2025-10-13T05:38:59.641297731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:59.641727 containerd[1965]: time="2025-10-13T05:38:59.641704727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.807541278s" Oct 13 05:38:59.641727 containerd[1965]: time="2025-10-13T05:38:59.641730434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:38:59.643434 containerd[1965]: time="2025-10-13T05:38:59.643392343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:38:59.648971 containerd[1965]: time="2025-10-13T05:38:59.648929816Z" level=info msg="CreateContainer within sandbox \"daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:38:59.666610 containerd[1965]: time="2025-10-13T05:38:59.666028846Z" level=info msg="Container 5c6fc862041bed3b8581d6751530c3774a6b8ee68bab8daa62adcb7cb48ed61a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:59.681223 containerd[1965]: time="2025-10-13T05:38:59.681181893Z" level=info msg="CreateContainer within sandbox \"daed6bda17b2b632a1a68b8e61ae44ea16f05137ad2110474d924e7baec19f69\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5c6fc862041bed3b8581d6751530c3774a6b8ee68bab8daa62adcb7cb48ed61a\"" Oct 13 05:38:59.683613 containerd[1965]: time="2025-10-13T05:38:59.683581856Z" level=info msg="StartContainer for \"5c6fc862041bed3b8581d6751530c3774a6b8ee68bab8daa62adcb7cb48ed61a\"" Oct 13 05:38:59.686240 containerd[1965]: time="2025-10-13T05:38:59.685981689Z" level=info msg="connecting to shim 5c6fc862041bed3b8581d6751530c3774a6b8ee68bab8daa62adcb7cb48ed61a" address="unix:///run/containerd/s/139cd33e47be5d5ba1025ff08d372a12b8fc2501d3398fae07cb65f2dda73084" protocol=ttrpc version=3 Oct 13 05:38:59.708840 systemd[1]: Started cri-containerd-5c6fc862041bed3b8581d6751530c3774a6b8ee68bab8daa62adcb7cb48ed61a.scope - libcontainer container 
5c6fc862041bed3b8581d6751530c3774a6b8ee68bab8daa62adcb7cb48ed61a. Oct 13 05:38:59.786622 systemd-networkd[1573]: calif3b33c40317: Gained IPv6LL Oct 13 05:38:59.799114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407229273.mount: Deactivated successfully. Oct 13 05:38:59.857699 containerd[1965]: time="2025-10-13T05:38:59.857629964Z" level=info msg="StartContainer for \"5c6fc862041bed3b8581d6751530c3774a6b8ee68bab8daa62adcb7cb48ed61a\" returns successfully" Oct 13 05:39:00.169758 systemd-networkd[1573]: calidc7e812006f: Gained IPv6LL Oct 13 05:39:00.520102 kubelet[3306]: I1013 05:39:00.519752 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f6f7f6bf7-47lc6" podStartSLOduration=3.0254738899999998 podStartE2EDuration="8.519728136s" podCreationTimestamp="2025-10-13 05:38:52 +0000 UTC" firstStartedPulling="2025-10-13 05:38:54.14862934 +0000 UTC m=+74.508054092" lastFinishedPulling="2025-10-13 05:38:59.642883587 +0000 UTC m=+80.002308338" observedRunningTime="2025-10-13 05:39:00.49626979 +0000 UTC m=+80.855694580" watchObservedRunningTime="2025-10-13 05:39:00.519728136 +0000 UTC m=+80.879152909" Oct 13 05:39:02.749722 ntpd[1941]: Listen normally on 6 vxlan.calico 192.168.25.128:123 Oct 13 05:39:02.749795 ntpd[1941]: Listen normally on 7 cali977101da529 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 6 vxlan.calico 192.168.25.128:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 7 cali977101da529 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 8 vxlan.calico [fe80::64df:69ff:fe1a:f30e%5]:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 9 cali34a66be1c95 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 10 calie578b1257b5 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 11 cali7c330adfd78 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 12 cali29a628e980e [fe80::ecee:eeff:feee:eeee%11]:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 13 cali47be432459c [fe80::ecee:eeff:feee:eeee%12]:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 14 calif3b33c40317 [fe80::ecee:eeff:feee:eeee%13]:123 Oct 13 05:39:02.754708 ntpd[1941]: 13 Oct 05:39:02 ntpd[1941]: Listen normally on 15 calidc7e812006f [fe80::ecee:eeff:feee:eeee%14]:123 Oct 13 05:39:02.749825 ntpd[1941]: Listen normally on 8 vxlan.calico [fe80::64df:69ff:fe1a:f30e%5]:123 Oct 13 05:39:02.749854 ntpd[1941]: Listen normally on 9 cali34a66be1c95 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 13 05:39:02.749885 ntpd[1941]: Listen normally on 10 calie578b1257b5 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 13 05:39:02.749910 ntpd[1941]: Listen normally on 11 cali7c330adfd78 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 13 05:39:02.749938 ntpd[1941]: Listen normally on 12 cali29a628e980e [fe80::ecee:eeff:feee:eeee%11]:123 Oct 13 05:39:02.749972 ntpd[1941]: Listen normally on 13 cali47be432459c [fe80::ecee:eeff:feee:eeee%12]:123 Oct 13 05:39:02.750000 ntpd[1941]: Listen normally on 14 calif3b33c40317 [fe80::ecee:eeff:feee:eeee%13]:123 Oct 13 05:39:02.750027 ntpd[1941]: Listen normally on 15 calidc7e812006f [fe80::ecee:eeff:feee:eeee%14]:123 
Oct 13 05:39:04.111741 containerd[1965]: time="2025-10-13T05:39:04.111674647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:04.113479 containerd[1965]: time="2025-10-13T05:39:04.113432586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:39:04.116246 containerd[1965]: time="2025-10-13T05:39:04.116182479Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:04.119319 containerd[1965]: time="2025-10-13T05:39:04.119140132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:04.120100 containerd[1965]: time="2025-10-13T05:39:04.119973555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.476531448s" Oct 13 05:39:04.120100 containerd[1965]: time="2025-10-13T05:39:04.120006019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:39:04.121212 containerd[1965]: time="2025-10-13T05:39:04.121080011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:39:04.128668 containerd[1965]: time="2025-10-13T05:39:04.128635435Z" level=info msg="CreateContainer within sandbox \"98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:39:04.148544 containerd[1965]: time="2025-10-13T05:39:04.148488484Z" level=info msg="Container 6f16f329edef8acc04d593357d4b62c5c580c945afeaef3e80385671d4c8abe8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:39:04.202705 containerd[1965]: time="2025-10-13T05:39:04.201437197Z" level=info msg="CreateContainer within sandbox \"98cb5b0fe93a3ac952870837979e227b8efaa3ecd75e31b55f5768a66d69b44c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6f16f329edef8acc04d593357d4b62c5c580c945afeaef3e80385671d4c8abe8\"" Oct 13 05:39:04.202705 containerd[1965]: time="2025-10-13T05:39:04.202144467Z" level=info msg="StartContainer for \"6f16f329edef8acc04d593357d4b62c5c580c945afeaef3e80385671d4c8abe8\"" Oct 13 05:39:04.204000 containerd[1965]: time="2025-10-13T05:39:04.203962894Z" level=info msg="connecting to shim 6f16f329edef8acc04d593357d4b62c5c580c945afeaef3e80385671d4c8abe8" address="unix:///run/containerd/s/4674ca39defb0896ccba4e29adbca54bee268bc4c652b498ae91eb6b43c60b87" protocol=ttrpc version=3 Oct 13 05:39:04.272632 systemd[1]: Started cri-containerd-6f16f329edef8acc04d593357d4b62c5c580c945afeaef3e80385671d4c8abe8.scope - libcontainer container 6f16f329edef8acc04d593357d4b62c5c580c945afeaef3e80385671d4c8abe8. 
Oct 13 05:39:04.353442 containerd[1965]: time="2025-10-13T05:39:04.352494708Z" level=info msg="StartContainer for \"6f16f329edef8acc04d593357d4b62c5c580c945afeaef3e80385671d4c8abe8\" returns successfully" Oct 13 05:39:04.563131 kubelet[3306]: I1013 05:39:04.563006 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8gw5k" podStartSLOduration=79.556903093 podStartE2EDuration="1m19.556903093s" podCreationTimestamp="2025-10-13 05:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:39:00.52307274 +0000 UTC m=+80.882497515" watchObservedRunningTime="2025-10-13 05:39:04.556903093 +0000 UTC m=+84.916327866" Oct 13 05:39:05.996008 kubelet[3306]: I1013 05:39:05.995952 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64486dfbd5-dvjbz" podStartSLOduration=32.126211331 podStartE2EDuration="39.995937703s" podCreationTimestamp="2025-10-13 05:38:26 +0000 UTC" firstStartedPulling="2025-10-13 05:38:56.251187549 +0000 UTC m=+76.610612299" lastFinishedPulling="2025-10-13 05:39:04.12091392 +0000 UTC m=+84.480338671" observedRunningTime="2025-10-13 05:39:04.563864952 +0000 UTC m=+84.923289742" watchObservedRunningTime="2025-10-13 05:39:05.995937703 +0000 UTC m=+86.355362495" Oct 13 05:39:08.293900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3901790113.mount: Deactivated successfully. Oct 13 05:39:09.322524 containerd[1965]: time="2025-10-13T05:39:09.321688917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:09.327979 containerd[1965]: time="2025-10-13T05:39:09.326283671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:39:09.357202 containerd[1965]: time="2025-10-13T05:39:09.357163319Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:09.362703 containerd[1965]: time="2025-10-13T05:39:09.361632877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:09.362703 containerd[1965]: time="2025-10-13T05:39:09.362300185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.241195552s" Oct 13 05:39:09.362703 containerd[1965]: time="2025-10-13T05:39:09.362323974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:39:09.366844 containerd[1965]: time="2025-10-13T05:39:09.366275591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:39:09.381548 containerd[1965]: time="2025-10-13T05:39:09.381502534Z" level=info msg="CreateContainer within sandbox \"4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db\" for container 
&ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:39:09.453972 containerd[1965]: time="2025-10-13T05:39:09.440553571Z" level=info msg="Container c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:39:09.477597 systemd[1]: Started sshd@7-172.31.26.130:22-139.178.89.65:52414.service - OpenSSH per-connection server daemon (139.178.89.65:52414). Oct 13 05:39:09.649623 containerd[1965]: time="2025-10-13T05:39:09.649550360Z" level=info msg="CreateContainer within sandbox \"4c867dca72f6303d526af1d0ea682bcfb6ee69b80182350b71f3dba76986b6db\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\"" Oct 13 05:39:09.652301 containerd[1965]: time="2025-10-13T05:39:09.652262659Z" level=info msg="StartContainer for \"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\"" Oct 13 05:39:09.659191 containerd[1965]: time="2025-10-13T05:39:09.658779070Z" level=info msg="connecting to shim c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda" address="unix:///run/containerd/s/dc1f112bea60ebf8d82525e1d7586513370fb41faa1cedb664d60db72b6d716e" protocol=ttrpc version=3 Oct 13 05:39:09.725961 systemd[1]: Started cri-containerd-c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda.scope - libcontainer container c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda. Oct 13 05:39:09.970523 sshd[5660]: Accepted publickey for core from 139.178.89.65 port 52414 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:09.977828 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:10.003350 systemd-logind[1948]: New session 8 of user core. Oct 13 05:39:10.008653 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 13 05:39:10.220511 containerd[1965]: time="2025-10-13T05:39:10.216847129Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:10.220511 containerd[1965]: time="2025-10-13T05:39:10.216920156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:39:10.255462 containerd[1965]: time="2025-10-13T05:39:10.252602271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 884.941686ms" Oct 13 05:39:10.255462 containerd[1965]: time="2025-10-13T05:39:10.252740335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:39:10.333571 containerd[1965]: time="2025-10-13T05:39:10.333512025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:39:10.355914 containerd[1965]: time="2025-10-13T05:39:10.355807211Z" level=info msg="StartContainer for \"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\" returns successfully" Oct 13 05:39:10.819304 containerd[1965]: time="2025-10-13T05:39:10.818977985Z" level=info msg="CreateContainer within sandbox \"cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:39:10.842128 containerd[1965]: time="2025-10-13T05:39:10.841858648Z" level=info msg="Container 22e079244c3b2429bdb933e3d409327998779d794c2c8d471f1e07ccc43d8b1b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:39:10.857546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785182310.mount: Deactivated successfully. Oct 13 05:39:10.869742 containerd[1965]: time="2025-10-13T05:39:10.869204061Z" level=info msg="CreateContainer within sandbox \"cb3aa7fdd09485f7489a6f8b13eac892b73b74b898bdcda04bf70ff928299692\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22e079244c3b2429bdb933e3d409327998779d794c2c8d471f1e07ccc43d8b1b\"" Oct 13 05:39:10.952965 containerd[1965]: time="2025-10-13T05:39:10.951014128Z" level=info msg="StartContainer for \"22e079244c3b2429bdb933e3d409327998779d794c2c8d471f1e07ccc43d8b1b\"" Oct 13 05:39:10.958755 containerd[1965]: time="2025-10-13T05:39:10.957606744Z" level=info msg="connecting to shim 22e079244c3b2429bdb933e3d409327998779d794c2c8d471f1e07ccc43d8b1b" address="unix:///run/containerd/s/7e880bdf349fce77fda84a683d47d72edeb3490399c5985cb52a2e70bc0bfdd0" protocol=ttrpc version=3 Oct 13 05:39:11.043661 systemd[1]: Started cri-containerd-22e079244c3b2429bdb933e3d409327998779d794c2c8d471f1e07ccc43d8b1b.scope - libcontainer container 22e079244c3b2429bdb933e3d409327998779d794c2c8d471f1e07ccc43d8b1b. 
Oct 13 05:39:11.232067 kubelet[3306]: I1013 05:39:11.225781 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-jcxzl" podStartSLOduration=29.336528751 podStartE2EDuration="41.160310271s" podCreationTimestamp="2025-10-13 05:38:30 +0000 UTC" firstStartedPulling="2025-10-13 05:38:57.541823295 +0000 UTC m=+77.901248066" lastFinishedPulling="2025-10-13 05:39:09.365604836 +0000 UTC m=+89.725029586" observedRunningTime="2025-10-13 05:39:11.158900592 +0000 UTC m=+91.518325367" watchObservedRunningTime="2025-10-13 05:39:11.160310271 +0000 UTC m=+91.519735044" Oct 13 05:39:11.383819 containerd[1965]: time="2025-10-13T05:39:11.383704268Z" level=info msg="StartContainer for \"22e079244c3b2429bdb933e3d409327998779d794c2c8d471f1e07ccc43d8b1b\" returns successfully" Oct 13 05:39:11.645571 sshd[5681]: Connection closed by 139.178.89.65 port 52414 Oct 13 05:39:11.644922 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:11.676375 systemd[1]: sshd@7-172.31.26.130:22-139.178.89.65:52414.service: Deactivated successfully. Oct 13 05:39:11.680124 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:39:11.683192 systemd-logind[1948]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:39:11.686027 systemd-logind[1948]: Removed session 8. Oct 13 05:39:12.394838 containerd[1965]: time="2025-10-13T05:39:12.394788067Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\" id:\"981d39617151fa1bdc4af2ed6b7fdda1db2209585303725da29d78667c98e878\" pid:5752 exit_status:1 exited_at:{seconds:1760333952 nanos:333802401}" Oct 13 05:39:12.766688 containerd[1965]: time="2025-10-13T05:39:12.766648299Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\" id:\"9a59f87ea3acc5a9df752ca3f32aecfc395aafd12b8a3d84e16587232365a349\" pid:5785 exit_status:1 exited_at:{seconds:1760333952 nanos:763625919}" Oct 13 05:39:13.564612 containerd[1965]: time="2025-10-13T05:39:13.563873267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\" id:\"bae6bec03f2fb984421a65b205edab216410c42b42c269ce61e4850be43aaba0\" pid:5812 exit_status:1 exited_at:{seconds:1760333953 nanos:562901023}" Oct 13 05:39:14.356864 kubelet[3306]: I1013 05:39:14.356675 3306 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:39:14.812462 kubelet[3306]: I1013 05:39:14.808027 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64486dfbd5-dnft8" podStartSLOduration=36.244709664 podStartE2EDuration="48.808006185s" podCreationTimestamp="2025-10-13 05:38:26 +0000 UTC" firstStartedPulling="2025-10-13 05:38:57.733219608 +0000 UTC m=+78.092644365" lastFinishedPulling="2025-10-13 05:39:10.296516121 +0000 UTC m=+90.655940886" observedRunningTime="2025-10-13 05:39:12.341126941 +0000 UTC m=+92.700551741" watchObservedRunningTime="2025-10-13 05:39:14.808006185 +0000 UTC m=+95.167430959" Oct 13 05:39:14.934676 containerd[1965]: time="2025-10-13T05:39:14.934620267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:14.937268 containerd[1965]: time="2025-10-13T05:39:14.937199616Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:39:14.977257 containerd[1965]: time="2025-10-13T05:39:14.976422002Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:14.982261 containerd[1965]: time="2025-10-13T05:39:14.982219789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:14.982716 containerd[1965]: time="2025-10-13T05:39:14.982685888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.648611985s" Oct 13 05:39:14.982803 containerd[1965]: time="2025-10-13T05:39:14.982723288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:39:15.000037 containerd[1965]: time="2025-10-13T05:39:14.998151648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:39:15.128711 containerd[1965]: time="2025-10-13T05:39:15.128600816Z" level=info msg="CreateContainer within sandbox \"0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:39:15.268448 containerd[1965]: time="2025-10-13T05:39:15.268141764Z" level=info msg="Container 300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:39:15.357183 containerd[1965]: time="2025-10-13T05:39:15.356967150Z" level=info msg="CreateContainer within sandbox \"0677c06405abbb849ec05368bf5e1d5cb4c2123fc75fa7698c9480bea77beb30\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380\"" Oct 13 05:39:15.360095 containerd[1965]: time="2025-10-13T05:39:15.358990843Z" level=info msg="StartContainer for \"300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380\"" Oct 13 05:39:15.428849 containerd[1965]: time="2025-10-13T05:39:15.428752461Z" level=info msg="connecting to shim 300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380" address="unix:///run/containerd/s/37a29acd17aa504f6c5411ebc5e590e28fc66eaf88aa422328914ed027d0d8bc" protocol=ttrpc version=3 Oct 13 05:39:15.480868 systemd[1]: Started cri-containerd-300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380.scope - libcontainer container 300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380. 
Oct 13 05:39:15.847599 containerd[1965]: time="2025-10-13T05:39:15.847537196Z" level=info msg="StartContainer for \"300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380\" returns successfully" Oct 13 05:39:16.471629 kubelet[3306]: I1013 05:39:16.471458 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b559457d7-fhqld" podStartSLOduration=28.237773673 podStartE2EDuration="45.471409077s" podCreationTimestamp="2025-10-13 05:38:31 +0000 UTC" firstStartedPulling="2025-10-13 05:38:57.755674652 +0000 UTC m=+78.115099403" lastFinishedPulling="2025-10-13 05:39:14.989310056 +0000 UTC m=+95.348734807" observedRunningTime="2025-10-13 05:39:16.471004176 +0000 UTC m=+96.830428947" watchObservedRunningTime="2025-10-13 05:39:16.471409077 +0000 UTC m=+96.830833875" Oct 13 05:39:16.595676 containerd[1965]: time="2025-10-13T05:39:16.595629629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380\" id:\"8151267a41559724a90f7d36ad751fb672b49311e54a2734cdaa5a7c3fa068ad\" pid:5883 exited_at:{seconds:1760333956 nanos:587212360}" Oct 13 05:39:16.713752 systemd[1]: Started sshd@8-172.31.26.130:22-139.178.89.65:38604.service - OpenSSH per-connection server daemon (139.178.89.65:38604). Oct 13 05:39:16.964655 sshd[5893]: Accepted publickey for core from 139.178.89.65 port 38604 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:16.968580 sshd-session[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:16.975336 systemd-logind[1948]: New session 9 of user core. Oct 13 05:39:16.983636 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:39:18.107015 sshd[5896]: Connection closed by 139.178.89.65 port 38604 Oct 13 05:39:18.108087 sshd-session[5893]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:18.123525 systemd[1]: sshd@8-172.31.26.130:22-139.178.89.65:38604.service: Deactivated successfully. Oct 13 05:39:18.127209 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:39:18.128717 systemd-logind[1948]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:39:18.131873 systemd-logind[1948]: Removed session 9. 
Oct 13 05:39:18.484629 containerd[1965]: time="2025-10-13T05:39:18.484129136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:18.487405 containerd[1965]: time="2025-10-13T05:39:18.486732676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:39:18.489587 containerd[1965]: time="2025-10-13T05:39:18.489507203Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:18.494177 containerd[1965]: time="2025-10-13T05:39:18.494133497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:18.497262 containerd[1965]: time="2025-10-13T05:39:18.495337526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.497145778s" Oct 13 05:39:18.497653 containerd[1965]: time="2025-10-13T05:39:18.497453897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:39:18.521235 containerd[1965]: time="2025-10-13T05:39:18.521171895Z" level=info msg="CreateContainer within sandbox \"a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:39:18.572602 containerd[1965]: time="2025-10-13T05:39:18.572557771Z" level=info msg="Container 1fb8aa8cd77fe4f3c773e9fd48ab19a0bd64ccce7c9a8cfbb9275db7d22b49c8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:39:18.598940 containerd[1965]: time="2025-10-13T05:39:18.598893798Z" level=info msg="CreateContainer within sandbox \"a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1fb8aa8cd77fe4f3c773e9fd48ab19a0bd64ccce7c9a8cfbb9275db7d22b49c8\"" Oct 13 05:39:18.600014 containerd[1965]: time="2025-10-13T05:39:18.599906868Z" level=info msg="StartContainer for \"1fb8aa8cd77fe4f3c773e9fd48ab19a0bd64ccce7c9a8cfbb9275db7d22b49c8\"" Oct 13 05:39:18.602734 containerd[1965]: time="2025-10-13T05:39:18.602668659Z" level=info msg="connecting to shim 1fb8aa8cd77fe4f3c773e9fd48ab19a0bd64ccce7c9a8cfbb9275db7d22b49c8" address="unix:///run/containerd/s/287885458e21bbdac183ad7f9a52e75609293eacbc2693ade6934fab54862d50" protocol=ttrpc version=3 Oct 13 05:39:18.657100 systemd[1]: Started cri-containerd-1fb8aa8cd77fe4f3c773e9fd48ab19a0bd64ccce7c9a8cfbb9275db7d22b49c8.scope - libcontainer container 1fb8aa8cd77fe4f3c773e9fd48ab19a0bd64ccce7c9a8cfbb9275db7d22b49c8. 
Oct 13 05:39:18.719709 containerd[1965]: time="2025-10-13T05:39:18.719591847Z" level=info msg="StartContainer for \"1fb8aa8cd77fe4f3c773e9fd48ab19a0bd64ccce7c9a8cfbb9275db7d22b49c8\" returns successfully" Oct 13 05:39:18.836581 containerd[1965]: time="2025-10-13T05:39:18.836468012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:39:21.407371 containerd[1965]: time="2025-10-13T05:39:21.407311380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:21.409464 containerd[1965]: time="2025-10-13T05:39:21.409402414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:39:21.412531 containerd[1965]: time="2025-10-13T05:39:21.412151593Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:21.415930 containerd[1965]: time="2025-10-13T05:39:21.415882575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:39:21.416716 containerd[1965]: time="2025-10-13T05:39:21.416679427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.580165329s" Oct 13 05:39:21.416716 containerd[1965]: time="2025-10-13T05:39:21.416718358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:39:21.423692 containerd[1965]: time="2025-10-13T05:39:21.423641858Z" level=info msg="CreateContainer within sandbox \"a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:39:21.438463 containerd[1965]: time="2025-10-13T05:39:21.438171502Z" level=info msg="Container edf2e82210440380a2afc7b72d370e5e8bcd303e3ca5cb4fb1b69d99c4268df6: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:39:21.456810 containerd[1965]: time="2025-10-13T05:39:21.456767847Z" level=info msg="CreateContainer within sandbox \"a08126d47c773fac203bcfbebc1b3207fe1f29e7ce59417445fd2a423fe17796\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"edf2e82210440380a2afc7b72d370e5e8bcd303e3ca5cb4fb1b69d99c4268df6\"" Oct 13 05:39:21.457743 containerd[1965]: time="2025-10-13T05:39:21.457689089Z" level=info msg="StartContainer for \"edf2e82210440380a2afc7b72d370e5e8bcd303e3ca5cb4fb1b69d99c4268df6\"" Oct 13 05:39:21.460096 containerd[1965]: time="2025-10-13T05:39:21.460055988Z" level=info msg="connecting to shim edf2e82210440380a2afc7b72d370e5e8bcd303e3ca5cb4fb1b69d99c4268df6" address="unix:///run/containerd/s/287885458e21bbdac183ad7f9a52e75609293eacbc2693ade6934fab54862d50" protocol=ttrpc version=3 Oct 13 05:39:21.498818 systemd[1]: Started 
cri-containerd-edf2e82210440380a2afc7b72d370e5e8bcd303e3ca5cb4fb1b69d99c4268df6.scope - libcontainer container edf2e82210440380a2afc7b72d370e5e8bcd303e3ca5cb4fb1b69d99c4268df6. Oct 13 05:39:21.598658 containerd[1965]: time="2025-10-13T05:39:21.596436345Z" level=info msg="StartContainer for \"edf2e82210440380a2afc7b72d370e5e8bcd303e3ca5cb4fb1b69d99c4268df6\" returns successfully" Oct 13 05:39:22.238025 kubelet[3306]: I1013 05:39:22.225399 3306 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:39:22.241491 kubelet[3306]: I1013 05:39:22.239605 3306 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:39:22.610638 kubelet[3306]: I1013 05:39:22.595531 3306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rx7bk" podStartSLOduration=29.644711093 podStartE2EDuration="52.570375672s" podCreationTimestamp="2025-10-13 05:38:30 +0000 UTC" firstStartedPulling="2025-10-13 05:38:58.492155714 +0000 UTC m=+78.851580465" lastFinishedPulling="2025-10-13 05:39:21.41782029 +0000 UTC m=+101.777245044" observedRunningTime="2025-10-13 05:39:22.567044329 +0000 UTC m=+102.926469102" watchObservedRunningTime="2025-10-13 05:39:22.570375672 +0000 UTC m=+102.929800444" Oct 13 05:39:23.155332 systemd[1]: Started sshd@9-172.31.26.130:22-139.178.89.65:49740.service - OpenSSH per-connection server daemon (139.178.89.65:49740). Oct 13 05:39:23.446041 sshd[5983]: Accepted publickey for core from 139.178.89.65 port 49740 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:23.449598 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:23.458509 systemd-logind[1948]: New session 10 of user core. Oct 13 05:39:23.463649 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:39:24.228628 sshd[5986]: Connection closed by 139.178.89.65 port 49740 Oct 13 05:39:24.229683 sshd-session[5983]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:24.243926 systemd-logind[1948]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:39:24.244951 systemd[1]: sshd@9-172.31.26.130:22-139.178.89.65:49740.service: Deactivated successfully. Oct 13 05:39:24.248609 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:39:24.262564 systemd-logind[1948]: Removed session 10. Oct 13 05:39:24.264723 systemd[1]: Started sshd@10-172.31.26.130:22-139.178.89.65:49756.service - OpenSSH per-connection server daemon (139.178.89.65:49756). Oct 13 05:39:24.457097 sshd[5999]: Accepted publickey for core from 139.178.89.65 port 49756 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:24.458310 sshd-session[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:24.464546 systemd-logind[1948]: New session 11 of user core. Oct 13 05:39:24.469531 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:39:24.826644 sshd[6022]: Connection closed by 139.178.89.65 port 49756 Oct 13 05:39:24.828050 sshd-session[5999]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:24.837931 systemd[1]: sshd@10-172.31.26.130:22-139.178.89.65:49756.service: Deactivated successfully. Oct 13 05:39:24.843737 systemd[1]: session-11.scope: Deactivated successfully. 
Oct 13 05:39:24.845877 systemd-logind[1948]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:39:24.868616 systemd-logind[1948]: Removed session 11. Oct 13 05:39:24.871159 systemd[1]: Started sshd@11-172.31.26.130:22-139.178.89.65:49760.service - OpenSSH per-connection server daemon (139.178.89.65:49760). Oct 13 05:39:25.168609 sshd[6036]: Accepted publickey for core from 139.178.89.65 port 49760 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:25.180032 sshd-session[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:25.188555 systemd-logind[1948]: New session 12 of user core. Oct 13 05:39:25.196693 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:39:25.254634 containerd[1965]: time="2025-10-13T05:39:25.254163208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b\" id:\"c2d41135699728e68bd789667dcaa8212b9d61ee3bac3ef663acf75a0755bb06\" pid:6015 exited_at:{seconds:1760333965 nanos:208507192}" Oct 13 05:39:25.762673 sshd[6040]: Connection closed by 139.178.89.65 port 49760 Oct 13 05:39:25.763356 sshd-session[6036]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:25.772381 systemd[1]: sshd@11-172.31.26.130:22-139.178.89.65:49760.service: Deactivated successfully. Oct 13 05:39:25.774579 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:39:25.775654 systemd-logind[1948]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:39:25.777530 systemd-logind[1948]: Removed session 12. Oct 13 05:39:30.795808 systemd[1]: Started sshd@12-172.31.26.130:22-139.178.89.65:49772.service - OpenSSH per-connection server daemon (139.178.89.65:49772). Oct 13 05:39:31.016015 sshd[6057]: Accepted publickey for core from 139.178.89.65 port 49772 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:31.018129 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:31.024226 systemd-logind[1948]: New session 13 of user core. Oct 13 05:39:31.031010 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:39:31.376440 sshd[6060]: Connection closed by 139.178.89.65 port 49772 Oct 13 05:39:31.376991 sshd-session[6057]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:31.381946 systemd-logind[1948]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:39:31.385655 systemd[1]: sshd@12-172.31.26.130:22-139.178.89.65:49772.service: Deactivated successfully. Oct 13 05:39:31.389400 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:39:31.393204 systemd-logind[1948]: Removed session 13. Oct 13 05:39:36.412765 systemd[1]: Started sshd@13-172.31.26.130:22-139.178.89.65:41858.service - OpenSSH per-connection server daemon (139.178.89.65:41858). Oct 13 05:39:36.678214 sshd[6078]: Accepted publickey for core from 139.178.89.65 port 41858 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:36.686319 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:36.692310 systemd-logind[1948]: New session 14 of user core. Oct 13 05:39:36.697673 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 13 05:39:37.591957 sshd[6081]: Connection closed by 139.178.89.65 port 41858 Oct 13 05:39:37.593874 sshd-session[6078]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:37.604320 systemd[1]: sshd@13-172.31.26.130:22-139.178.89.65:41858.service: Deactivated successfully. Oct 13 05:39:37.606630 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:39:37.608472 systemd-logind[1948]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:39:37.610075 systemd-logind[1948]: Removed session 14. Oct 13 05:39:42.637655 systemd[1]: Started sshd@14-172.31.26.130:22-139.178.89.65:36972.service - OpenSSH per-connection server daemon (139.178.89.65:36972). Oct 13 05:39:42.837508 sshd[6096]: Accepted publickey for core from 139.178.89.65 port 36972 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:42.837883 sshd-session[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:42.847614 systemd-logind[1948]: New session 15 of user core. Oct 13 05:39:42.856159 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:39:43.192701 sshd[6099]: Connection closed by 139.178.89.65 port 36972 Oct 13 05:39:43.193682 sshd-session[6096]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:43.199272 systemd[1]: sshd@14-172.31.26.130:22-139.178.89.65:36972.service: Deactivated successfully. Oct 13 05:39:43.202447 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:39:43.205145 systemd-logind[1948]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:39:43.206274 systemd-logind[1948]: Removed session 15. Oct 13 05:39:43.680354 containerd[1965]: time="2025-10-13T05:39:43.680306879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380\" id:\"4fad61fc5bc34709818e3b7e2ed610e842497415545cbfe98524f649981ba4d1\" pid:6145 exited_at:{seconds:1760333983 nanos:679919106}" Oct 13 05:39:43.934162 containerd[1965]: time="2025-10-13T05:39:43.934059147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\" id:\"1caf9c8cdac2719129612bb7d62b41a46de4848c6a4de06d69b4b7836c8dcfd1\" pid:6124 exited_at:{seconds:1760333983 nanos:933350008}" Oct 13 05:39:43.936081 containerd[1965]: time="2025-10-13T05:39:43.935978001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\" id:\"e19679cd40898b3ac8966dcd654a64376afc9ad5c72fe28aee5c6459958c1fdc\" pid:6166 exited_at:{seconds:1760333983 nanos:935639933}" Oct 13 05:39:46.533045 containerd[1965]: time="2025-10-13T05:39:46.532980323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380\" id:\"2e6aeba1a68f77da2cba0112ed13fe41e40511370826baec37ed384cb01dd604\" pid:6194 exited_at:{seconds:1760333986 nanos:532703946}" Oct 13 05:39:48.227729 systemd[1]: Started sshd@15-172.31.26.130:22-139.178.89.65:36984.service - OpenSSH per-connection server daemon (139.178.89.65:36984). Oct 13 05:39:48.496241 sshd[6205]: Accepted publickey for core from 139.178.89.65 port 36984 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:48.497971 sshd-session[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:48.504316 systemd-logind[1948]: New session 16 of user core. 
Oct 13 05:39:48.510622 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:39:49.534429 sshd[6209]: Connection closed by 139.178.89.65 port 36984 Oct 13 05:39:49.536600 sshd-session[6205]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:49.542089 systemd[1]: sshd@15-172.31.26.130:22-139.178.89.65:36984.service: Deactivated successfully. Oct 13 05:39:49.545067 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:39:49.548016 systemd-logind[1948]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:39:49.549393 systemd-logind[1948]: Removed session 16. Oct 13 05:39:49.567140 systemd[1]: Started sshd@16-172.31.26.130:22-139.178.89.65:36996.service - OpenSSH per-connection server daemon (139.178.89.65:36996). Oct 13 05:39:49.751270 sshd[6220]: Accepted publickey for core from 139.178.89.65 port 36996 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:49.752755 sshd-session[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:49.758456 systemd-logind[1948]: New session 17 of user core. Oct 13 05:39:49.762621 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 05:39:50.454452 sshd[6225]: Connection closed by 139.178.89.65 port 36996 Oct 13 05:39:50.455955 sshd-session[6220]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:50.468999 systemd[1]: sshd@16-172.31.26.130:22-139.178.89.65:36996.service: Deactivated successfully. Oct 13 05:39:50.471140 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:39:50.472755 systemd-logind[1948]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:39:50.474157 systemd-logind[1948]: Removed session 17. Oct 13 05:39:50.485786 systemd[1]: Started sshd@17-172.31.26.130:22-139.178.89.65:37000.service - OpenSSH per-connection server daemon (139.178.89.65:37000). Oct 13 05:39:50.691842 sshd[6234]: Accepted publickey for core from 139.178.89.65 port 37000 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:50.693249 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:50.698627 systemd-logind[1948]: New session 18 of user core. Oct 13 05:39:50.703621 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:39:51.570541 sshd[6237]: Connection closed by 139.178.89.65 port 37000 Oct 13 05:39:51.573140 sshd-session[6234]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:51.578039 systemd[1]: sshd@17-172.31.26.130:22-139.178.89.65:37000.service: Deactivated successfully. Oct 13 05:39:51.580666 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:39:51.583130 systemd-logind[1948]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:39:51.585308 systemd-logind[1948]: Removed session 18. Oct 13 05:39:51.611351 systemd[1]: Started sshd@18-172.31.26.130:22-139.178.89.65:37004.service - OpenSSH per-connection server daemon (139.178.89.65:37004). Oct 13 05:39:51.829137 sshd[6258]: Accepted publickey for core from 139.178.89.65 port 37004 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0 Oct 13 05:39:51.831480 sshd-session[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:51.840398 systemd-logind[1948]: New session 19 of user core. Oct 13 05:39:51.846041 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 13 05:39:52.695656 sshd[6261]: Connection closed by 139.178.89.65 port 37004
Oct 13 05:39:52.707121 sshd-session[6258]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:52.717651 systemd-logind[1948]: Session 19 logged out. Waiting for processes to exit.
Oct 13 05:39:52.720110 systemd[1]: sshd@18-172.31.26.130:22-139.178.89.65:37004.service: Deactivated successfully.
Oct 13 05:39:52.724983 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 05:39:52.741540 systemd-logind[1948]: Removed session 19.
Oct 13 05:39:52.744354 systemd[1]: Started sshd@19-172.31.26.130:22-139.178.89.65:49250.service - OpenSSH per-connection server daemon (139.178.89.65:49250).
Oct 13 05:39:52.975124 sshd[6272]: Accepted publickey for core from 139.178.89.65 port 49250 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0
Oct 13 05:39:52.976940 sshd-session[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:52.983800 systemd-logind[1948]: New session 20 of user core.
Oct 13 05:39:52.988644 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:39:53.180518 sshd[6275]: Connection closed by 139.178.89.65 port 49250
Oct 13 05:39:53.181290 sshd-session[6272]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:53.186079 systemd[1]: sshd@19-172.31.26.130:22-139.178.89.65:49250.service: Deactivated successfully.
Oct 13 05:39:53.188887 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:39:53.191918 systemd-logind[1948]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:39:53.193287 systemd-logind[1948]: Removed session 20.
Oct 13 05:39:54.890812 containerd[1965]: time="2025-10-13T05:39:54.890760864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b\" id:\"fd3efa49da54d85ed7fd78a106e4bf8f3f15f7c50d2d4a6aef292a5290c78938\" pid:6299 exited_at:{seconds:1760333994 nanos:868009091}"
Oct 13 05:39:58.214952 systemd[1]: Started sshd@20-172.31.26.130:22-139.178.89.65:49260.service - OpenSSH per-connection server daemon (139.178.89.65:49260).
Oct 13 05:39:58.478998 sshd[6314]: Accepted publickey for core from 139.178.89.65 port 49260 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0
Oct 13 05:39:58.481487 sshd-session[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:58.487183 systemd-logind[1948]: New session 21 of user core.
Oct 13 05:39:58.492644 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:39:59.140997 sshd[6317]: Connection closed by 139.178.89.65 port 49260
Oct 13 05:39:59.142604 sshd-session[6314]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:59.147676 systemd[1]: sshd@20-172.31.26.130:22-139.178.89.65:49260.service: Deactivated successfully.
Oct 13 05:39:59.151551 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:39:59.152739 systemd-logind[1948]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:39:59.155333 systemd-logind[1948]: Removed session 21.
Oct 13 05:40:04.188049 systemd[1]: Started sshd@21-172.31.26.130:22-139.178.89.65:53864.service - OpenSSH per-connection server daemon (139.178.89.65:53864).
Oct 13 05:40:04.521006 sshd[6329]: Accepted publickey for core from 139.178.89.65 port 53864 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0
Oct 13 05:40:04.525305 sshd-session[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:40:04.534133 systemd-logind[1948]: New session 22 of user core.
Oct 13 05:40:04.540642 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:40:05.332435 sshd[6333]: Connection closed by 139.178.89.65 port 53864
Oct 13 05:40:05.334666 sshd-session[6329]: pam_unix(sshd:session): session closed for user core
Oct 13 05:40:05.342661 systemd[1]: sshd@21-172.31.26.130:22-139.178.89.65:53864.service: Deactivated successfully.
Oct 13 05:40:05.347899 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:40:05.350232 systemd-logind[1948]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:40:05.353816 systemd-logind[1948]: Removed session 22.
Oct 13 05:40:10.388795 systemd[1]: Started sshd@22-172.31.26.130:22-139.178.89.65:53880.service - OpenSSH per-connection server daemon (139.178.89.65:53880).
Oct 13 05:40:10.680777 sshd[6346]: Accepted publickey for core from 139.178.89.65 port 53880 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0
Oct 13 05:40:10.691964 sshd-session[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:40:10.701371 systemd-logind[1948]: New session 23 of user core.
Oct 13 05:40:10.705769 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:40:11.672481 sshd[6349]: Connection closed by 139.178.89.65 port 53880
Oct 13 05:40:11.678443 sshd-session[6346]: pam_unix(sshd:session): session closed for user core
Oct 13 05:40:11.688714 systemd[1]: sshd@22-172.31.26.130:22-139.178.89.65:53880.service: Deactivated successfully.
Oct 13 05:40:11.692300 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:40:11.694317 systemd-logind[1948]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:40:11.698589 systemd-logind[1948]: Removed session 23.
Oct 13 05:40:13.847499 containerd[1965]: time="2025-10-13T05:40:13.847431222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\" id:\"3c252b2daab0739dca82be8321f1a3d66a0c18c5f68dba43bddefaf03aaf1e7f\" pid:6371 exited_at:{seconds:1760334013 nanos:792867738}"
Oct 13 05:40:16.688263 containerd[1965]: time="2025-10-13T05:40:16.688208577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380\" id:\"9f6e4b9ca9748ec0bdcfdf55fdaef9e4e3d6bc12df87a0ca547054f74f997844\" pid:6398 exited_at:{seconds:1760334016 nanos:687075267}"
Oct 13 05:40:16.712327 systemd[1]: Started sshd@23-172.31.26.130:22-139.178.89.65:38614.service - OpenSSH per-connection server daemon (139.178.89.65:38614).
Oct 13 05:40:17.065236 sshd[6410]: Accepted publickey for core from 139.178.89.65 port 38614 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0
Oct 13 05:40:17.069340 sshd-session[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:40:17.078095 systemd-logind[1948]: New session 24 of user core.
Oct 13 05:40:17.085976 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 13 05:40:17.965624 sshd[6413]: Connection closed by 139.178.89.65 port 38614
Oct 13 05:40:17.967194 sshd-session[6410]: pam_unix(sshd:session): session closed for user core
Oct 13 05:40:17.982913 systemd[1]: sshd@23-172.31.26.130:22-139.178.89.65:38614.service: Deactivated successfully.
Oct 13 05:40:17.985952 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 05:40:17.987654 systemd-logind[1948]: Session 24 logged out. Waiting for processes to exit.
Oct 13 05:40:17.989905 systemd-logind[1948]: Removed session 24.
Oct 13 05:40:23.005480 systemd[1]: Started sshd@24-172.31.26.130:22-139.178.89.65:49202.service - OpenSSH per-connection server daemon (139.178.89.65:49202).
Oct 13 05:40:23.211736 sshd[6430]: Accepted publickey for core from 139.178.89.65 port 49202 ssh2: RSA SHA256:BpThGCEpsxsVRhv7+/XjjOxxuXXUG8jXPfh6ni97mp0
Oct 13 05:40:23.214056 sshd-session[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:40:23.224204 systemd-logind[1948]: New session 25 of user core.
Oct 13 05:40:23.230020 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 13 05:40:23.845046 sshd[6433]: Connection closed by 139.178.89.65 port 49202
Oct 13 05:40:23.848468 sshd-session[6430]: pam_unix(sshd:session): session closed for user core
Oct 13 05:40:23.856910 systemd[1]: sshd@24-172.31.26.130:22-139.178.89.65:49202.service: Deactivated successfully.
Oct 13 05:40:23.861615 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 05:40:23.865074 systemd-logind[1948]: Session 25 logged out. Waiting for processes to exit.
Oct 13 05:40:23.866849 systemd-logind[1948]: Removed session 25.
Oct 13 05:40:25.113239 containerd[1965]: time="2025-10-13T05:40:25.113191059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b\" id:\"1ce5a55f275d2217b57b4279336a02b32bfb9c9692215b13177eb3cfbb46f755\" pid:6456 exited_at:{seconds:1760334025 nanos:112552234}"
Oct 13 05:40:37.099853 systemd[1]: cri-containerd-b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038.scope: Deactivated successfully.
Oct 13 05:40:37.104781 systemd[1]: cri-containerd-b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038.scope: Consumed 11.820s CPU time, 116.4M memory peak, 100.6M read from disk.
Oct 13 05:40:37.253455 containerd[1965]: time="2025-10-13T05:40:37.253275885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038\" id:\"b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038\" pid:3681 exit_status:1 exited_at:{seconds:1760334037 nanos:218276434}"
Oct 13 05:40:37.279282 containerd[1965]: time="2025-10-13T05:40:37.269835565Z" level=info msg="received exit event container_id:\"b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038\" id:\"b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038\" pid:3681 exit_status:1 exited_at:{seconds:1760334037 nanos:218276434}"
Oct 13 05:40:37.440675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038-rootfs.mount: Deactivated successfully.
Oct 13 05:40:38.071522 kubelet[3306]: I1013 05:40:38.067368 3306 scope.go:117] "RemoveContainer" containerID="b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038"
Oct 13 05:40:38.180090 containerd[1965]: time="2025-10-13T05:40:38.180040451Z" level=info msg="CreateContainer within sandbox \"38f0a19176ba0d0aed6519617dc7db6c6903e13894baebdba40f8c032fa2cfde\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 13 05:40:38.305481 containerd[1965]: time="2025-10-13T05:40:38.305182176Z" level=info msg="Container 61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:40:38.310909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990009823.mount: Deactivated successfully.
Oct 13 05:40:38.362109 containerd[1965]: time="2025-10-13T05:40:38.361644483Z" level=info msg="CreateContainer within sandbox \"38f0a19176ba0d0aed6519617dc7db6c6903e13894baebdba40f8c032fa2cfde\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9\""
Oct 13 05:40:38.363529 containerd[1965]: time="2025-10-13T05:40:38.363466916Z" level=info msg="StartContainer for \"61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9\""
Oct 13 05:40:38.365744 containerd[1965]: time="2025-10-13T05:40:38.365710471Z" level=info msg="connecting to shim 61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9" address="unix:///run/containerd/s/d4db5f5ad493c952b8bfc00974a863eb5f8255009fb91ce5c84d9176951516ac" protocol=ttrpc version=3
Oct 13 05:40:38.467631 systemd[1]: Started cri-containerd-61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9.scope - libcontainer container 61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9.
Oct 13 05:40:38.517688 containerd[1965]: time="2025-10-13T05:40:38.517648144Z" level=info msg="StartContainer for \"61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9\" returns successfully"
Oct 13 05:40:38.578083 systemd[1]: cri-containerd-40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52.scope: Deactivated successfully.
Oct 13 05:40:38.578757 systemd[1]: cri-containerd-40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52.scope: Consumed 3.674s CPU time, 96.2M memory peak, 140.2M read from disk.
Oct 13 05:40:38.586442 containerd[1965]: time="2025-10-13T05:40:38.585746364Z" level=info msg="received exit event container_id:\"40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52\" id:\"40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52\" pid:3114 exit_status:1 exited_at:{seconds:1760334038 nanos:584619101}"
Oct 13 05:40:38.588465 containerd[1965]: time="2025-10-13T05:40:38.588325483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52\" id:\"40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52\" pid:3114 exit_status:1 exited_at:{seconds:1760334038 nanos:584619101}"
Oct 13 05:40:38.644145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52-rootfs.mount: Deactivated successfully.
Oct 13 05:40:38.990025 kubelet[3306]: I1013 05:40:38.989729 3306 scope.go:117] "RemoveContainer" containerID="40db531112f48f0e47dfe4fb83610dbc7aeb2d492db84ca5d85298567e337b52"
Oct 13 05:40:38.992387 containerd[1965]: time="2025-10-13T05:40:38.992340254Z" level=info msg="CreateContainer within sandbox \"8d742dfae72508116bd141daf3bdb843a3edecf838c84e69cbb55f65b1aefb44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 13 05:40:39.033715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169056977.mount: Deactivated successfully.
Oct 13 05:40:39.038914 containerd[1965]: time="2025-10-13T05:40:39.038741825Z" level=info msg="Container 91b07e62fa8098c6fdfe384e0361fbdfc3a2eb4f5ede45a9d87a5ad86aa1e724: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:40:39.053516 containerd[1965]: time="2025-10-13T05:40:39.053467805Z" level=info msg="CreateContainer within sandbox \"8d742dfae72508116bd141daf3bdb843a3edecf838c84e69cbb55f65b1aefb44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"91b07e62fa8098c6fdfe384e0361fbdfc3a2eb4f5ede45a9d87a5ad86aa1e724\""
Oct 13 05:40:39.054021 containerd[1965]: time="2025-10-13T05:40:39.053991644Z" level=info msg="StartContainer for \"91b07e62fa8098c6fdfe384e0361fbdfc3a2eb4f5ede45a9d87a5ad86aa1e724\""
Oct 13 05:40:39.055483 containerd[1965]: time="2025-10-13T05:40:39.055408621Z" level=info msg="connecting to shim 91b07e62fa8098c6fdfe384e0361fbdfc3a2eb4f5ede45a9d87a5ad86aa1e724" address="unix:///run/containerd/s/04b24666bcb233dc5540a077b1245fe3dd5e1b08e3ebeb2b8bc3a1b9996fda1f" protocol=ttrpc version=3
Oct 13 05:40:39.074723 systemd[1]: Started cri-containerd-91b07e62fa8098c6fdfe384e0361fbdfc3a2eb4f5ede45a9d87a5ad86aa1e724.scope - libcontainer container 91b07e62fa8098c6fdfe384e0361fbdfc3a2eb4f5ede45a9d87a5ad86aa1e724.
Oct 13 05:40:39.132926 containerd[1965]: time="2025-10-13T05:40:39.132872258Z" level=info msg="StartContainer for \"91b07e62fa8098c6fdfe384e0361fbdfc3a2eb4f5ede45a9d87a5ad86aa1e724\" returns successfully"
Oct 13 05:40:39.448931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193840735.mount: Deactivated successfully.
Oct 13 05:40:42.682241 systemd[1]: cri-containerd-0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d.scope: Deactivated successfully.
Oct 13 05:40:42.682516 systemd[1]: cri-containerd-0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d.scope: Consumed 3.025s CPU time, 38.3M memory peak, 80.3M read from disk.
Oct 13 05:40:42.687490 containerd[1965]: time="2025-10-13T05:40:42.687446361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d\" id:\"0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d\" pid:3142 exit_status:1 exited_at:{seconds:1760334042 nanos:686300008}"
Oct 13 05:40:42.690353 containerd[1965]: time="2025-10-13T05:40:42.690315732Z" level=info msg="received exit event container_id:\"0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d\" id:\"0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d\" pid:3142 exit_status:1 exited_at:{seconds:1760334042 nanos:686300008}"
Oct 13 05:40:42.724327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d-rootfs.mount: Deactivated successfully.
Oct 13 05:40:43.009427 kubelet[3306]: I1013 05:40:43.009372 3306 scope.go:117] "RemoveContainer" containerID="0e489a95e171bd7b087a09ff4f8bcf292d3207aacf0668457858db59aa18a91d"
Oct 13 05:40:43.011826 containerd[1965]: time="2025-10-13T05:40:43.011778559Z" level=info msg="CreateContainer within sandbox \"ebed4d85b5181561ca3239a63621add503e3e31fb0f71235843a6d4e549f8345\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Oct 13 05:40:43.038460 containerd[1965]: time="2025-10-13T05:40:43.038193616Z" level=info msg="Container b2e8c7ec18580ea5b7519da06b8b1ac3b45d408aae6bbffb3f9b466f501fdd4f: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:40:43.047272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3661887456.mount: Deactivated successfully.
Oct 13 05:40:43.057986 containerd[1965]: time="2025-10-13T05:40:43.057939695Z" level=info msg="CreateContainer within sandbox \"ebed4d85b5181561ca3239a63621add503e3e31fb0f71235843a6d4e549f8345\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b2e8c7ec18580ea5b7519da06b8b1ac3b45d408aae6bbffb3f9b466f501fdd4f\""
Oct 13 05:40:43.058769 containerd[1965]: time="2025-10-13T05:40:43.058702306Z" level=info msg="StartContainer for \"b2e8c7ec18580ea5b7519da06b8b1ac3b45d408aae6bbffb3f9b466f501fdd4f\""
Oct 13 05:40:43.061508 containerd[1965]: time="2025-10-13T05:40:43.061470570Z" level=info msg="connecting to shim b2e8c7ec18580ea5b7519da06b8b1ac3b45d408aae6bbffb3f9b466f501fdd4f" address="unix:///run/containerd/s/306bdfe04ac8998cb7d5642e2b2d277f67905fd6af1d6541b4551eec9e8594b3" protocol=ttrpc version=3
Oct 13 05:40:43.095605 systemd[1]: Started cri-containerd-b2e8c7ec18580ea5b7519da06b8b1ac3b45d408aae6bbffb3f9b466f501fdd4f.scope - libcontainer container b2e8c7ec18580ea5b7519da06b8b1ac3b45d408aae6bbffb3f9b466f501fdd4f.
Oct 13 05:40:43.153271 containerd[1965]: time="2025-10-13T05:40:43.153216493Z" level=info msg="StartContainer for \"b2e8c7ec18580ea5b7519da06b8b1ac3b45d408aae6bbffb3f9b466f501fdd4f\" returns successfully"
Oct 13 05:40:43.506682 containerd[1965]: time="2025-10-13T05:40:43.506465150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\" id:\"9f1aa3a996643dd7ec4197191b85c46f3e353e074880448c284b817b7bc0c3fb\" pid:6635 exited_at:{seconds:1760334043 nanos:506113331}"
Oct 13 05:40:43.664124 containerd[1965]: time="2025-10-13T05:40:43.664078689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380\" id:\"1ce26fc4ffaf6081bdd4f686718ff99d9337ad57cd2fb236e14214063d5eb66f\" pid:6658 exit_status:1 exited_at:{seconds:1760334043 nanos:663756354}"
Oct 13 05:40:43.754961 containerd[1965]: time="2025-10-13T05:40:43.754918897Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c81d6828ce542b7bd85950830a125e8e99dfc0a17e085396359e888ecb5d9fda\" id:\"32eeefacd8054e4b07d76e104ed476894327de4940c8394b5a0ead37ae2981f1\" pid:6680 exited_at:{seconds:1760334043 nanos:754517365}"
Oct 13 05:40:43.961346 kubelet[3306]: E1013 05:40:43.961208 3306 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 05:40:46.496691 containerd[1965]: time="2025-10-13T05:40:46.496623069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300fc49b68422de63f5712afbc455dfea1fdfe2e2ffe1c435e35ec592b5fc380\" id:\"bfc4b6e206dab1f0b80c8798ee426064ec025a381475557e02b98b200321c2e3\" pid:6708 exit_status:1 exited_at:{seconds:1760334046 nanos:496325925}"
Oct 13 05:40:50.176961 systemd[1]: cri-containerd-61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9.scope: Deactivated successfully.
Oct 13 05:40:50.177611 systemd[1]: cri-containerd-61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9.scope: Consumed 305ms CPU time, 69M memory peak, 33.8M read from disk.
Oct 13 05:40:50.179574 containerd[1965]: time="2025-10-13T05:40:50.178487314Z" level=info msg="received exit event container_id:\"61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9\" id:\"61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9\" pid:6515 exit_status:1 exited_at:{seconds:1760334050 nanos:177758662}"
Oct 13 05:40:50.180393 containerd[1965]: time="2025-10-13T05:40:50.180349198Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9\" id:\"61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9\" pid:6515 exit_status:1 exited_at:{seconds:1760334050 nanos:177758662}"
Oct 13 05:40:50.203370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9-rootfs.mount: Deactivated successfully.
Oct 13 05:40:51.032991 kubelet[3306]: I1013 05:40:51.032897 3306 scope.go:117] "RemoveContainer" containerID="b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038"
Oct 13 05:40:51.034002 kubelet[3306]: I1013 05:40:51.033503 3306 scope.go:117] "RemoveContainer" containerID="61b88acb5380d17044567245e411f20154aabb788e3a62b532fc3796e349edb9"
Oct 13 05:40:51.036648 kubelet[3306]: E1013 05:40:51.036595 3306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-5ngfz_tigera-operator(fb77dfca-99c9-4b08-ab84-57e2bbe759aa)\"" pod="tigera-operator/tigera-operator-755d956888-5ngfz" podUID="fb77dfca-99c9-4b08-ab84-57e2bbe759aa"
Oct 13 05:40:51.149641 containerd[1965]: time="2025-10-13T05:40:51.149590769Z" level=info msg="RemoveContainer for \"b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038\""
Oct 13 05:40:51.171570 containerd[1965]: time="2025-10-13T05:40:51.171518978Z" level=info msg="RemoveContainer for \"b649f37ae49df3c0cad341d08f64273b2de688dd50f9e7f2c2d1f71c5571f038\" returns successfully"
Oct 13 05:40:53.962013 kubelet[3306]: E1013 05:40:53.961895 3306 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 05:40:54.679583 containerd[1965]: time="2025-10-13T05:40:54.679546234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4d1d98d05de31cee74e6ce6b876e9f1ae8242f0f139e9c84651334047a25d3b\" id:\"2ecbde1e58d10d16bb54eba39fa4dfbcfe7afa4fa0ffb1192c75b4af9fbbd197\" pid:6744 exited_at:{seconds:1760334054 nanos:679154678}"