Jul 15 05:15:09.934783 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025 Jul 15 05:15:09.934825 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:15:09.934841 kernel: BIOS-provided physical RAM map: Jul 15 05:15:09.934853 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 15 05:15:09.934865 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jul 15 05:15:09.934877 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 15 05:15:09.934891 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 15 05:15:09.934904 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 15 05:15:09.934919 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 15 05:15:09.934931 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 15 05:15:09.934944 kernel: NX (Execute Disable) protection: active Jul 15 05:15:09.934956 kernel: APIC: Static calls initialized Jul 15 05:15:09.934968 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jul 15 05:15:09.934982 kernel: extended physical RAM map: Jul 15 05:15:09.935001 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 15 05:15:09.935015 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jul 15 05:15:09.935029 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jul 15 05:15:09.935043 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jul 15 05:15:09.935056 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 15 05:15:09.935070 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 15 05:15:09.935084 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 15 05:15:09.935099 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 15 05:15:09.935112 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 15 05:15:09.935126 kernel: efi: EFI v2.7 by EDK II Jul 15 05:15:09.935142 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Jul 15 05:15:09.935156 kernel: secureboot: Secure boot disabled Jul 15 05:15:09.935186 kernel: SMBIOS 2.7 present. 
Jul 15 05:15:09.935201 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jul 15 05:15:09.935214 kernel: DMI: Memory slots populated: 1/1 Jul 15 05:15:09.935228 kernel: Hypervisor detected: KVM Jul 15 05:15:09.935242 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 15 05:15:09.935255 kernel: kvm-clock: using sched offset of 5182721593 cycles Jul 15 05:15:09.935271 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 05:15:09.935284 kernel: tsc: Detected 2499.996 MHz processor Jul 15 05:15:09.935299 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 05:15:09.935316 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 05:15:09.935330 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jul 15 05:15:09.935345 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 15 05:15:09.935359 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 05:15:09.935373 kernel: Using GB pages for direct mapping Jul 15 05:15:09.935391 kernel: ACPI: Early table checksum verification disabled Jul 15 05:15:09.935410 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jul 15 05:15:09.935425 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jul 15 05:15:09.935439 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 15 05:15:09.935455 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jul 15 05:15:09.935470 kernel: ACPI: FACS 0x00000000789D0000 000040 Jul 15 05:15:09.935485 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jul 15 05:15:09.935500 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 15 05:15:09.935515 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 15 05:15:09.935533 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jul 15 05:15:09.935548 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jul 15 05:15:09.935563 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 15 05:15:09.935578 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 15 05:15:09.935594 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jul 15 05:15:09.935608 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jul 15 05:15:09.935624 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jul 15 05:15:09.935639 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jul 15 05:15:09.935656 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jul 15 05:15:09.935671 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jul 15 05:15:09.935685 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jul 15 05:15:09.935700 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jul 15 05:15:09.935716 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jul 15 05:15:09.935731 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jul 15 05:15:09.935745 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jul 15 05:15:09.935760 kernel: ACPI: Reserving BGRT table memory 
at [mem 0x78951000-0x78951037] Jul 15 05:15:09.935775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jul 15 05:15:09.935790 kernel: NUMA: Initialized distance table, cnt=1 Jul 15 05:15:09.935808 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Jul 15 05:15:09.935823 kernel: Zone ranges: Jul 15 05:15:09.935837 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 05:15:09.935852 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jul 15 05:15:09.935867 kernel: Normal empty Jul 15 05:15:09.935881 kernel: Device empty Jul 15 05:15:09.935896 kernel: Movable zone start for each node Jul 15 05:15:09.935911 kernel: Early memory node ranges Jul 15 05:15:09.935926 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 15 05:15:09.935943 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jul 15 05:15:09.935958 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jul 15 05:15:09.935973 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jul 15 05:15:09.935988 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 05:15:09.936002 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 15 05:15:09.936017 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 15 05:15:09.936033 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jul 15 05:15:09.936047 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 15 05:15:09.936062 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 15 05:15:09.936079 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jul 15 05:15:09.936093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 15 05:15:09.936107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 05:15:09.936120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 15 05:15:09.936135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 15 05:15:09.936146 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 05:15:09.936159 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 15 05:15:09.936188 kernel: TSC deadline timer available Jul 15 05:15:09.936206 kernel: CPU topo: Max. logical packages: 1 Jul 15 05:15:09.936225 kernel: CPU topo: Max. logical dies: 1 Jul 15 05:15:09.936251 kernel: CPU topo: Max. dies per package: 1 Jul 15 05:15:09.936270 kernel: CPU topo: Max. threads per core: 2 Jul 15 05:15:09.936282 kernel: CPU topo: Num. cores per package: 1 Jul 15 05:15:09.936296 kernel: CPU topo: Num. 
threads per package: 2 Jul 15 05:15:09.936311 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 15 05:15:09.936325 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 15 05:15:09.936340 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jul 15 05:15:09.936355 kernel: Booting paravirtualized kernel on KVM Jul 15 05:15:09.936370 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 05:15:09.936388 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 15 05:15:09.936403 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 15 05:15:09.936417 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 15 05:15:09.936432 kernel: pcpu-alloc: [0] 0 1 Jul 15 05:15:09.936447 kernel: kvm-guest: PV spinlocks enabled Jul 15 05:15:09.936462 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 05:15:09.936479 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:15:09.936495 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 05:15:09.936512 kernel: random: crng init done Jul 15 05:15:09.936527 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 05:15:09.936542 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 15 05:15:09.936557 kernel: Fallback order for Node 0: 0 Jul 15 05:15:09.936572 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Jul 15 05:15:09.936587 kernel: Policy zone: DMA32 Jul 15 05:15:09.936612 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 05:15:09.936630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 15 05:15:09.936646 kernel: Kernel/User page tables isolation: enabled Jul 15 05:15:09.936661 kernel: ftrace: allocating 40097 entries in 157 pages Jul 15 05:15:09.936677 kernel: ftrace: allocated 157 pages with 5 groups Jul 15 05:15:09.936693 kernel: Dynamic Preempt: voluntary Jul 15 05:15:09.936711 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 05:15:09.936732 kernel: rcu: RCU event tracing is enabled. Jul 15 05:15:09.936748 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 15 05:15:09.936764 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 05:15:09.936780 kernel: Rude variant of Tasks RCU enabled. Jul 15 05:15:09.936798 kernel: Tracing variant of Tasks RCU enabled. Jul 15 05:15:09.936814 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 05:15:09.936829 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 15 05:15:09.936845 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 05:15:09.936861 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 05:15:09.936877 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 15 05:15:09.936892 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 15 05:15:09.936908 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 05:15:09.936924 kernel: Console: colour dummy device 80x25 Jul 15 05:15:09.936942 kernel: printk: legacy console [tty0] enabled Jul 15 05:15:09.936957 kernel: printk: legacy console [ttyS0] enabled Jul 15 05:15:09.936973 kernel: ACPI: Core revision 20240827 Jul 15 05:15:09.936989 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jul 15 05:15:09.937005 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 05:15:09.937020 kernel: x2apic enabled Jul 15 05:15:09.937036 kernel: APIC: Switched APIC routing to: physical x2apic Jul 15 05:15:09.937052 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 15 05:15:09.937067 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jul 15 05:15:09.937086 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 15 05:15:09.937102 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 15 05:15:09.937118 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 05:15:09.937133 kernel: Spectre V2 : Mitigation: Retpolines Jul 15 05:15:09.937148 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 05:15:09.938233 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 15 05:15:09.938256 kernel: RETBleed: Vulnerable Jul 15 05:15:09.938272 kernel: Speculative Store Bypass: Vulnerable Jul 15 05:15:09.938288 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jul 15 05:15:09.938303 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 15 05:15:09.938324 kernel: GDS: Unknown: Dependent on hypervisor status Jul 15 05:15:09.938340 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 15 05:15:09.938355 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 05:15:09.938371 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 05:15:09.938387 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 05:15:09.938402 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 15 05:15:09.938418 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 15 05:15:09.938433 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 15 05:15:09.938449 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 15 05:15:09.938464 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 15 05:15:09.938480 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 15 05:15:09.938498 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 05:15:09.938514 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 15 05:15:09.938530 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 15 05:15:09.938545 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jul 15 05:15:09.938560 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jul 15 05:15:09.938576 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jul 15 05:15:09.938591 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jul 15 05:15:09.938607 kernel: x86/fpu: Enabled xstate features 0x2ff, context 
size is 2568 bytes, using 'compacted' format. Jul 15 05:15:09.938623 kernel: Freeing SMP alternatives memory: 32K Jul 15 05:15:09.938638 kernel: pid_max: default: 32768 minimum: 301 Jul 15 05:15:09.938653 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 05:15:09.938671 kernel: landlock: Up and running. Jul 15 05:15:09.938687 kernel: SELinux: Initializing. Jul 15 05:15:09.938702 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 15 05:15:09.938718 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 15 05:15:09.938734 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 15 05:15:09.938750 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 15 05:15:09.938766 kernel: signal: max sigframe size: 3632 Jul 15 05:15:09.938782 kernel: rcu: Hierarchical SRCU implementation. Jul 15 05:15:09.938799 kernel: rcu: Max phase no-delay instances is 400. Jul 15 05:15:09.938815 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 05:15:09.938833 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 15 05:15:09.938849 kernel: smp: Bringing up secondary CPUs ... Jul 15 05:15:09.938865 kernel: smpboot: x86: Booting SMP configuration: Jul 15 05:15:09.938881 kernel: .... node #0, CPUs: #1 Jul 15 05:15:09.938898 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 15 05:15:09.938915 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 15 05:15:09.938930 kernel: smp: Brought up 1 node, 2 CPUs Jul 15 05:15:09.938946 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jul 15 05:15:09.938963 kernel: Memory: 1908052K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 125188K reserved, 0K cma-reserved) Jul 15 05:15:09.938981 kernel: devtmpfs: initialized Jul 15 05:15:09.938997 kernel: x86/mm: Memory block size: 128MB Jul 15 05:15:09.939013 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jul 15 05:15:09.939028 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 05:15:09.939044 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 15 05:15:09.939060 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 05:15:09.939076 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 05:15:09.939092 kernel: audit: initializing netlink subsys (disabled) Jul 15 05:15:09.939107 kernel: audit: type=2000 audit(1752556508.368:1): state=initialized audit_enabled=0 res=1 Jul 15 05:15:09.939126 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 05:15:09.939141 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 05:15:09.939157 kernel: cpuidle: using governor menu Jul 15 05:15:09.941602 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 05:15:09.941629 kernel: dca service started, version 1.12.1 Jul 15 05:15:09.941646 kernel: PCI: Using configuration type 1 for base access Jul 15 05:15:09.941664 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 05:15:09.941680 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 05:15:09.941696 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 05:15:09.941716 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 05:15:09.941731 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 05:15:09.941747 kernel: ACPI: Added _OSI(Module Device) Jul 15 05:15:09.941763 kernel: ACPI: Added _OSI(Processor Device) Jul 15 05:15:09.941779 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 05:15:09.941793 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 15 05:15:09.941806 kernel: ACPI: Interpreter enabled Jul 15 05:15:09.941820 kernel: ACPI: PM: (supports S0 S5) Jul 15 05:15:09.941835 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 05:15:09.941852 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 05:15:09.941867 kernel: PCI: Using E820 reservations for host bridge windows Jul 15 05:15:09.941884 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 15 05:15:09.941899 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 05:15:09.942138 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 15 05:15:09.942308 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 15 05:15:09.942458 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 15 05:15:09.942484 kernel: acpiphp: Slot [3] registered Jul 15 05:15:09.942502 kernel: acpiphp: Slot [4] registered Jul 15 05:15:09.942518 kernel: acpiphp: Slot [5] registered Jul 15 05:15:09.942534 kernel: acpiphp: Slot [6] registered Jul 15 05:15:09.942548 kernel: acpiphp: Slot [7] registered Jul 15 05:15:09.942562 kernel: acpiphp: Slot [8] registered Jul 15 05:15:09.942579 kernel: acpiphp: Slot [9] registered Jul 15 05:15:09.942592 kernel: acpiphp: Slot [10] registered Jul 15 05:15:09.942608 kernel: acpiphp: Slot [11] registered Jul 15 05:15:09.942625 kernel: acpiphp: Slot [12] registered Jul 15 05:15:09.942640 kernel: acpiphp: Slot [13] registered Jul 15 05:15:09.942655 kernel: acpiphp: Slot [14] registered Jul 15 05:15:09.942670 kernel: acpiphp: Slot [15] registered Jul 15 05:15:09.942683 kernel: acpiphp: Slot [16] registered Jul 15 05:15:09.942699 kernel: acpiphp: Slot [17] registered Jul 15 05:15:09.942715 kernel: acpiphp: Slot [18] registered Jul 15 05:15:09.942732 kernel: acpiphp: Slot [19] registered Jul 15 05:15:09.942748 kernel: acpiphp: Slot [20] registered Jul 15 05:15:09.942766 kernel: acpiphp: Slot [21] registered Jul 15 05:15:09.942785 kernel: acpiphp: Slot [22] registered Jul 15 05:15:09.942802 kernel: acpiphp: Slot [23] registered Jul 15 05:15:09.942818 kernel: acpiphp: Slot [24] registered Jul 15 05:15:09.942833 kernel: acpiphp: Slot [25] registered Jul 15 05:15:09.942848 kernel: acpiphp: Slot [26] registered Jul 15 05:15:09.942862 kernel: acpiphp: Slot [27] registered Jul 15 05:15:09.942875 kernel: acpiphp: Slot [28] registered Jul 15 05:15:09.942889 kernel: acpiphp: Slot [29] registered Jul 15 05:15:09.942904 kernel: acpiphp: Slot [30] registered Jul 15 05:15:09.942921 kernel: acpiphp: Slot [31] registered Jul 15 05:15:09.942935 kernel: PCI host bridge to bus 0000:00 Jul 15 05:15:09.943084 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 05:15:09.944285 kernel: pci_bus 0000:00: root bus 
resource [io 0x0d00-0xffff window] Jul 15 05:15:09.944433 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 05:15:09.944559 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 15 05:15:09.944684 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jul 15 05:15:09.944808 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 05:15:09.944962 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 15 05:15:09.945105 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jul 15 05:15:09.946075 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jul 15 05:15:09.946266 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 15 05:15:09.946406 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jul 15 05:15:09.946542 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jul 15 05:15:09.946679 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jul 15 05:15:09.946819 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jul 15 05:15:09.946960 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jul 15 05:15:09.947100 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jul 15 05:15:09.947266 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs Jul 15 05:15:09.947402 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jul 15 05:15:09.947534 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jul 15 05:15:09.947665 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 15 05:15:09.947794 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 15 05:15:09.947932 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jul 15 05:15:09.948054 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jul 15 05:15:09.948204 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jul 15 05:15:09.948329 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jul 15 05:15:09.948352 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 15 05:15:09.948367 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 15 05:15:09.948382 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 15 05:15:09.948396 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 15 05:15:09.948411 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 15 05:15:09.948426 kernel: iommu: Default domain type: Translated Jul 15 05:15:09.948441 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 05:15:09.948455 kernel: efivars: Registered efivars operations Jul 15 05:15:09.948470 kernel: PCI: Using ACPI for IRQ routing Jul 15 05:15:09.948487 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 05:15:09.948501 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jul 15 05:15:09.948516 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jul 15 05:15:09.948529 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jul 15 05:15:09.948649 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jul 15 05:15:09.948769 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jul 15 05:15:09.948892 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 15 05:15:09.948910 kernel: vgaarb: 
loaded Jul 15 05:15:09.948927 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 15 05:15:09.948942 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jul 15 05:15:09.948957 kernel: clocksource: Switched to clocksource kvm-clock Jul 15 05:15:09.948971 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 05:15:09.948986 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 05:15:09.949000 kernel: pnp: PnP ACPI init Jul 15 05:15:09.949015 kernel: pnp: PnP ACPI: found 5 devices Jul 15 05:15:09.949029 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 05:15:09.949044 kernel: NET: Registered PF_INET protocol family Jul 15 05:15:09.949061 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 05:15:09.949076 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 15 05:15:09.949090 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 05:15:09.949105 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 15 05:15:09.949119 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 15 05:15:09.949134 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 15 05:15:09.949148 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 15 05:15:09.949183 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 15 05:15:09.949198 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 05:15:09.949215 kernel: NET: Registered PF_XDP protocol family Jul 15 05:15:09.949334 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 15 05:15:09.949445 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 15 05:15:09.949571 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 15 05:15:09.949701 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 15 05:15:09.949830 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jul 15 05:15:09.949970 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 15 05:15:09.949992 kernel: PCI: CLS 0 bytes, default 64 Jul 15 05:15:09.950013 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 15 05:15:09.950031 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 15 05:15:09.950048 kernel: clocksource: Switched to clocksource tsc Jul 15 05:15:09.950065 kernel: Initialise system trusted keyrings Jul 15 05:15:09.950081 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 15 05:15:09.950097 kernel: Key type asymmetric registered Jul 15 05:15:09.950114 kernel: Asymmetric key parser 'x509' registered Jul 15 05:15:09.950130 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 15 05:15:09.950147 kernel: io scheduler mq-deadline registered Jul 15 05:15:09.950167 kernel: io scheduler kyber registered Jul 15 05:15:09.950263 kernel: io scheduler bfq registered Jul 15 05:15:09.950277 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 05:15:09.950293 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 05:15:09.950309 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 05:15:09.950323 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 15 05:15:09.950336 kernel: 
i8042: Warning: Keylock active Jul 15 05:15:09.950350 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 15 05:15:09.950366 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 05:15:09.950533 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 15 05:15:09.950662 kernel: rtc_cmos 00:00: registered as rtc0 Jul 15 05:15:09.950796 kernel: rtc_cmos 00:00: setting system clock to 2025-07-15T05:15:09 UTC (1752556509) Jul 15 05:15:09.950931 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 15 05:15:09.950977 kernel: intel_pstate: CPU model not supported Jul 15 05:15:09.950996 kernel: efifb: probing for efifb Jul 15 05:15:09.951011 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jul 15 05:15:09.951027 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 15 05:15:09.951046 kernel: efifb: scrolling: redraw Jul 15 05:15:09.951061 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 15 05:15:09.951078 kernel: Console: switching to colour frame buffer device 100x37 Jul 15 05:15:09.951095 kernel: fb0: EFI VGA frame buffer device Jul 15 05:15:09.951113 kernel: pstore: Using crash dump compression: deflate Jul 15 05:15:09.951128 kernel: pstore: Registered efi_pstore as persistent store backend Jul 15 05:15:09.951144 kernel: NET: Registered PF_INET6 protocol family Jul 15 05:15:09.951160 kernel: Segment Routing with IPv6 Jul 15 05:15:09.952226 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 05:15:09.952253 kernel: NET: Registered PF_PACKET protocol family Jul 15 05:15:09.952271 kernel: Key type dns_resolver registered Jul 15 05:15:09.952288 kernel: IPI shorthand broadcast: enabled Jul 15 05:15:09.952306 kernel: sched_clock: Marking stable (2806069720, 239543984)->(3154372822, -108759118) Jul 15 05:15:09.952323 kernel: registered taskstats version 1 Jul 15 05:15:09.952340 kernel: Loading compiled-in X.509 certificates Jul 15 05:15:09.952358 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7' Jul 15 05:15:09.952375 kernel: Demotion targets for Node 0: null Jul 15 05:15:09.952392 kernel: Key type .fscrypt registered Jul 15 05:15:09.952411 kernel: Key type fscrypt-provisioning registered Jul 15 05:15:09.952429 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 05:15:09.952447 kernel: ima: Allocated hash algorithm: sha1 Jul 15 05:15:09.952464 kernel: ima: No architecture policies found Jul 15 05:15:09.952481 kernel: clk: Disabling unused clocks Jul 15 05:15:09.952498 kernel: Warning: unable to open an initial console. Jul 15 05:15:09.952515 kernel: Freeing unused kernel image (initmem) memory: 54608K Jul 15 05:15:09.952532 kernel: Write protecting the kernel read-only data: 24576k Jul 15 05:15:09.952553 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 15 05:15:09.952572 kernel: Run /init as init process Jul 15 05:15:09.952590 kernel: with arguments: Jul 15 05:15:09.952607 kernel: /init Jul 15 05:15:09.952623 kernel: with environment: Jul 15 05:15:09.952640 kernel: HOME=/ Jul 15 05:15:09.952659 kernel: TERM=linux Jul 15 05:15:09.952679 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 05:15:09.952698 systemd[1]: Successfully made /usr/ read-only. 
Jul 15 05:15:09.952720 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:15:09.952739 systemd[1]: Detected virtualization amazon. Jul 15 05:15:09.952756 systemd[1]: Detected architecture x86-64. Jul 15 05:15:09.952773 systemd[1]: Running in initrd. Jul 15 05:15:09.952793 systemd[1]: No hostname configured, using default hostname. Jul 15 05:15:09.952811 systemd[1]: Hostname set to . Jul 15 05:15:09.952829 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:15:09.952847 systemd[1]: Queued start job for default target initrd.target. Jul 15 05:15:09.952865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:15:09.952882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:15:09.952902 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 05:15:09.952920 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:15:09.952940 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 05:15:09.952959 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 05:15:09.952979 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 05:15:09.952997 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 05:15:09.953015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:15:09.953033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:15:09.953051 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:15:09.953072 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:15:09.953090 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:15:09.953108 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:15:09.953126 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:15:09.953144 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:15:09.953181 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 05:15:09.953200 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 05:15:09.953218 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:15:09.953235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:15:09.953253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:15:09.953269 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:15:09.953285 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 05:15:09.953302 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:15:09.953319 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jul 15 05:15:09.953337 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 05:15:09.953354 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 05:15:09.953371 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:15:09.953390 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:15:09.953407 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:09.953424 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 05:15:09.953476 systemd-journald[207]: Collecting audit messages is disabled. Jul 15 05:15:09.953518 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:15:09.953535 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 05:15:09.953554 systemd-journald[207]: Journal started Jul 15 05:15:09.953593 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2aad653e5dba4ae2b2d7a0e492c636) is 4.8M, max 38.4M, 33.6M free. Jul 15 05:15:09.958196 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 05:15:09.959242 systemd-modules-load[208]: Inserted module 'overlay' Jul 15 05:15:09.967197 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:15:09.976343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:15:09.977821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:09.988336 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 05:15:09.994538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:15:10.001471 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:15:10.007326 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 15 05:15:10.013296 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 05:15:10.021928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:15:10.026551 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 05:15:10.030194 kernel: Bridge firewalling registered Jul 15 05:15:10.030327 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 15 05:15:10.031674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:15:10.037383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:15:10.039448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:15:10.042492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:15:10.046308 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 05:15:10.064124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:15:10.068387 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 15 05:15:10.072533 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:15:10.129852 systemd-resolved[255]: Positive Trust Anchors: Jul 15 05:15:10.130880 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:15:10.130945 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:15:10.139797 systemd-resolved[255]: Defaulting to hostname 'linux'. Jul 15 05:15:10.141208 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:15:10.141939 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:15:10.175215 kernel: SCSI subsystem initialized Jul 15 05:15:10.185277 kernel: Loading iSCSI transport class v2.0-870. Jul 15 05:15:10.197311 kernel: iscsi: registered transport (tcp) Jul 15 05:15:10.219518 kernel: iscsi: registered transport (qla4xxx) Jul 15 05:15:10.219593 kernel: QLogic iSCSI HBA Driver Jul 15 05:15:10.239729 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:15:10.265983 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:15:10.268265 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:15:10.314997 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 05:15:10.317023 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 05:15:10.372225 kernel: raid6: avx512x4 gen() 17759 MB/s Jul 15 05:15:10.390219 kernel: raid6: avx512x2 gen() 17586 MB/s Jul 15 05:15:10.408205 kernel: raid6: avx512x1 gen() 17835 MB/s Jul 15 05:15:10.426220 kernel: raid6: avx2x4 gen() 17583 MB/s Jul 15 05:15:10.444202 kernel: raid6: avx2x2 gen() 17730 MB/s Jul 15 05:15:10.462469 kernel: raid6: avx2x1 gen() 13659 MB/s Jul 15 05:15:10.462541 kernel: raid6: using algorithm avx512x1 gen() 17835 MB/s Jul 15 05:15:10.481823 kernel: raid6: .... xor() 21385 MB/s, rmw enabled Jul 15 05:15:10.481909 kernel: raid6: using avx512x2 recovery algorithm Jul 15 05:15:10.503211 kernel: xor: automatically using best checksumming function avx Jul 15 05:15:10.672209 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 05:15:10.679609 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:15:10.681866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:15:10.709374 systemd-udevd[457]: Using default interface naming scheme 'v255'. 
Jul 15 05:15:10.716193 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:15:10.720460 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 05:15:10.743823 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jul 15 05:15:10.771322 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:15:10.773350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:15:10.829428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:15:10.834364 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 05:15:10.918624 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 15 05:15:10.918902 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 15 05:15:10.930579 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jul 15 05:15:10.943229 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 05:15:10.953065 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jul 15 05:15:10.961814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:15:10.961988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:10.964321 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:10.966485 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:10.968488 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:15:10.978211 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:44:bf:58:69:1b Jul 15 05:15:10.987034 kernel: AES CTR mode by8 optimization enabled Jul 15 05:15:10.987699 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:15:10.987776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:15:10.991222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:11.004193 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 15 05:15:11.001870 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:15:11.010185 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 15 05:15:11.017080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:11.028204 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 15 05:15:11.038427 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 05:15:11.038483 kernel: GPT:9289727 != 16777215 Jul 15 05:15:11.038502 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 05:15:11.038514 kernel: GPT:9289727 != 16777215 Jul 15 05:15:11.038525 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 05:15:11.038536 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 05:15:11.057862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:11.083215 kernel: nvme nvme0: using unchecked data buffer Jul 15 05:15:11.211378 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 15 05:15:11.229555 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 15 05:15:11.230514 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jul 15 05:15:11.249255 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 15 05:15:11.249824 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 15 05:15:11.261521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 15 05:15:11.262201 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:15:11.263422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:15:11.264583 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:15:11.266374 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 05:15:11.268319 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 05:15:11.286552 disk-uuid[697]: Primary Header is updated. Jul 15 05:15:11.286552 disk-uuid[697]: Secondary Entries is updated. Jul 15 05:15:11.286552 disk-uuid[697]: Secondary Header is updated. Jul 15 05:15:11.293916 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 05:15:11.294958 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:15:11.314207 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 05:15:12.321383 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 05:15:12.321777 disk-uuid[700]: The operation has completed successfully. Jul 15 05:15:12.451617 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 05:15:12.451747 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 05:15:12.497081 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 05:15:12.510860 sh[963]: Success Jul 15 05:15:12.531459 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 05:15:12.531538 kernel: device-mapper: uevent: version 1.0.3 Jul 15 05:15:12.531553 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 05:15:12.544202 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 15 05:15:12.634714 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 05:15:12.638260 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 05:15:12.653018 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 05:15:12.676578 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 05:15:12.676647 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (986) Jul 15 05:15:12.683228 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b Jul 15 05:15:12.683300 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:12.683314 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 05:15:12.785805 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 05:15:12.786759 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:15:12.787314 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 05:15:12.788051 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 15 05:15:12.789680 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 05:15:12.825306 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1019) Jul 15 05:15:12.829612 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:12.829685 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:12.833195 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:15:12.855409 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:12.855970 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 05:15:12.858460 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 05:15:12.890288 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:15:12.892760 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:15:12.931387 systemd-networkd[1155]: lo: Link UP Jul 15 05:15:12.931399 systemd-networkd[1155]: lo: Gained carrier Jul 15 05:15:12.933293 systemd-networkd[1155]: Enumeration completed Jul 15 05:15:12.933737 systemd-networkd[1155]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:12.933742 systemd-networkd[1155]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:15:12.934781 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:15:12.936205 systemd[1]: Reached target network.target - Network. Jul 15 05:15:12.937757 systemd-networkd[1155]: eth0: Link UP Jul 15 05:15:12.937763 systemd-networkd[1155]: eth0: Gained carrier Jul 15 05:15:12.937780 systemd-networkd[1155]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:12.957387 systemd-networkd[1155]: eth0: DHCPv4 address 172.31.18.224/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 15 05:15:13.346106 ignition[1116]: Ignition 2.21.0 Jul 15 05:15:13.346125 ignition[1116]: Stage: fetch-offline Jul 15 05:15:13.346372 ignition[1116]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:13.346385 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:13.348699 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:15:13.346657 ignition[1116]: Ignition finished successfully Jul 15 05:15:13.350950 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 15 05:15:13.379355 ignition[1165]: Ignition 2.21.0 Jul 15 05:15:13.380249 ignition[1165]: Stage: fetch Jul 15 05:15:13.380735 ignition[1165]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:13.380747 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:13.380916 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:13.392118 ignition[1165]: PUT result: OK Jul 15 05:15:13.399471 ignition[1165]: parsed url from cmdline: "" Jul 15 05:15:13.399481 ignition[1165]: no config URL provided Jul 15 05:15:13.399489 ignition[1165]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 05:15:13.399502 ignition[1165]: no config at "/usr/lib/ignition/user.ign" Jul 15 05:15:13.399521 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:13.400514 ignition[1165]: PUT result: OK Jul 15 05:15:13.400570 ignition[1165]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 15 05:15:13.401488 ignition[1165]: GET result: OK Jul 15 05:15:13.401646 ignition[1165]: parsing config with SHA512: 657f6e31f20f83076362c498f94b76341e435aeb982d052fdcc5fe8d4dcf32aa1b6fc602a529dbcc57503a7bb5e19e6824bbaaddf594216532b7f7d98e38dad9 Jul 15 05:15:13.407642 unknown[1165]: fetched base config from "system" Jul 15 05:15:13.407657 unknown[1165]: fetched base config from "system" Jul 15 05:15:13.408219 ignition[1165]: fetch: fetch complete Jul 15 05:15:13.407664 unknown[1165]: fetched user config from "aws" Jul 15 05:15:13.408226 ignition[1165]: fetch: fetch passed Jul 15 05:15:13.408290 ignition[1165]: Ignition finished successfully Jul 15 05:15:13.411215 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 05:15:13.413075 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 05:15:13.445857 ignition[1172]: Ignition 2.21.0 Jul 15 05:15:13.445871 ignition[1172]: Stage: kargs Jul 15 05:15:13.446411 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:13.446421 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:13.446530 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:13.448054 ignition[1172]: PUT result: OK Jul 15 05:15:13.450899 ignition[1172]: kargs: kargs passed Jul 15 05:15:13.450958 ignition[1172]: Ignition finished successfully Jul 15 05:15:13.452651 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 05:15:13.454070 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 05:15:13.480067 ignition[1179]: Ignition 2.21.0 Jul 15 05:15:13.480080 ignition[1179]: Stage: disks Jul 15 05:15:13.480382 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:13.480390 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:13.480469 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:13.481543 ignition[1179]: PUT result: OK Jul 15 05:15:13.483722 ignition[1179]: disks: disks passed Jul 15 05:15:13.483776 ignition[1179]: Ignition finished successfully Jul 15 05:15:13.485399 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 05:15:13.485909 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 05:15:13.486227 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 05:15:13.486747 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 15 05:15:13.487023 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:15:13.487590 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:15:13.488975 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 05:15:13.553931 systemd-fsck[1188]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 05:15:13.556916 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 05:15:13.559362 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 05:15:13.713687 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none. Jul 15 05:15:13.713833 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 05:15:13.714694 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 05:15:13.716754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:15:13.719261 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 05:15:13.720438 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 05:15:13.720820 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 05:15:13.720846 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:15:13.729800 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 05:15:13.732142 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 05:15:13.748192 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1207) Jul 15 05:15:13.752816 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:13.752878 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:13.752892 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:15:13.763954 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:15:14.173122 initrd-setup-root[1231]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 05:15:14.187493 initrd-setup-root[1238]: cut: /sysroot/etc/group: No such file or directory Jul 15 05:15:14.191622 initrd-setup-root[1245]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 05:15:14.195999 initrd-setup-root[1252]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 05:15:14.492121 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 05:15:14.494371 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 05:15:14.497428 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 05:15:14.516707 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 05:15:14.520894 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:14.549370 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
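The fsck and sysroot mounts above address devices by filesystem label (ROOT on nvme0n1p9, OEM on nvme0n1p6 on this machine) via the udev-maintained symlinks under /dev/disk/by-label. A tiny sketch of that lookup, for illustration:

    #!/usr/bin/env python3
    # Resolve /dev/disk/by-label symlinks to the underlying block devices,
    # the same mapping systemd-fsck-root and the sysroot mounts rely on.
    import os
    import pathlib

    BY_LABEL = pathlib.Path("/dev/disk/by-label")

    if __name__ == "__main__":
        for label in ("ROOT", "OEM"):
            link = BY_LABEL / label
            if link.exists():
                print(f"{label} -> {os.path.realpath(link)}")
            else:
                print(f"{label}: no such label on this system")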
Jul 15 05:15:14.552902 ignition[1320]: INFO : Ignition 2.21.0 Jul 15 05:15:14.552902 ignition[1320]: INFO : Stage: mount Jul 15 05:15:14.554615 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:14.554615 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:14.554615 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:14.554615 ignition[1320]: INFO : PUT result: OK Jul 15 05:15:14.557463 ignition[1320]: INFO : mount: mount passed Jul 15 05:15:14.558642 ignition[1320]: INFO : Ignition finished successfully Jul 15 05:15:14.559478 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 05:15:14.560952 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 05:15:14.715627 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:15:14.753431 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1332) Jul 15 05:15:14.756464 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:14.756535 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:14.759052 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:15:14.769026 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:15:14.796619 ignition[1349]: INFO : Ignition 2.21.0 Jul 15 05:15:14.796619 ignition[1349]: INFO : Stage: files Jul 15 05:15:14.798103 ignition[1349]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:14.798103 ignition[1349]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:14.798103 ignition[1349]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:14.798103 ignition[1349]: INFO : PUT result: OK Jul 15 05:15:14.801449 ignition[1349]: DEBUG : files: compiled without relabeling support, skipping Jul 15 05:15:14.802972 ignition[1349]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 05:15:14.802972 ignition[1349]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 05:15:14.807792 ignition[1349]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 05:15:14.808698 ignition[1349]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 05:15:14.808698 ignition[1349]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 05:15:14.808347 unknown[1349]: wrote ssh authorized keys file for user: core Jul 15 05:15:14.811111 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 05:15:14.811860 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 15 05:15:14.905472 systemd-networkd[1155]: eth0: Gained IPv6LL Jul 15 05:15:15.109800 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 05:15:15.346317 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 05:15:15.347273 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 05:15:15.347273 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 05:15:15.347273 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:15:15.347273 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:15:15.347273 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:15:15.347273 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:15:15.347273 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:15:15.347273 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:15:15.352590 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:15:15.352590 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:15:15.352590 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:15:15.354929 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:15:15.354929 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:15:15.354929 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 15 05:15:16.169494 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 05:15:18.636443 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:15:18.636443 ignition[1349]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 05:15:18.638619 ignition[1349]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:15:18.642409 ignition[1349]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:15:18.642409 ignition[1349]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 05:15:18.642409 ignition[1349]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 15 05:15:18.644878 ignition[1349]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 05:15:18.644878 ignition[1349]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:15:18.644878 ignition[1349]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:15:18.644878 
ignition[1349]: INFO : files: files passed Jul 15 05:15:18.644878 ignition[1349]: INFO : Ignition finished successfully Jul 15 05:15:18.644092 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 05:15:18.645793 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 05:15:18.650399 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 05:15:18.657832 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 05:15:18.657945 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 05:15:18.675884 initrd-setup-root-after-ignition[1379]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:15:18.675884 initrd-setup-root-after-ignition[1379]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:15:18.679253 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:15:18.678989 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:15:18.680154 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 05:15:18.682088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 05:15:18.729761 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 05:15:18.729906 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 05:15:18.731138 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 05:15:18.732255 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 05:15:18.733024 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 05:15:18.734724 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 05:15:18.756154 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:15:18.758305 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 05:15:18.781676 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:15:18.782343 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:15:18.783357 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 05:15:18.784232 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 05:15:18.784397 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:15:18.785760 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 05:15:18.786680 systemd[1]: Stopped target basic.target - Basic System. Jul 15 05:15:18.787386 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 05:15:18.788151 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:15:18.788933 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 05:15:18.794415 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:15:18.795232 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 05:15:18.795869 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:15:18.796788 systemd[1]: Stopped target sysinit.target - System Initialization. 
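The files stage logged above (helm tarball into /opt, SSH key for "core", the kubernetes sysext image and its /etc/extensions link, prepare-helm.service enabled) is driven by the user-provided Ignition config. As a hedged sketch only, a config shaped roughly like the following would produce those operations; field names follow the Ignition spec v3 schema from memory, and the SSH key and unit body are placeholders, not the config this instance actually booted with:

    #!/usr/bin/env python3
    # Hedged sketch of an Ignition-style (spec v3) config covering a subset of
    # the operations in the files stage above. Illustrative, not the real config.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [
                {"name": "core",
                 "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n# placeholder unit body\n"}
            ]
        },
    }

    if __name__ == "__main__":
        print(json.dumps(config, indent=2))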
Jul 15 05:15:18.798223 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 05:15:18.799162 systemd[1]: Stopped target swap.target - Swaps. Jul 15 05:15:18.799905 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 05:15:18.800136 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:15:18.801339 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:15:18.802137 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:15:18.802792 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 05:15:18.802932 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:15:18.803628 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 05:15:18.803847 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 05:15:18.805307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 05:15:18.805568 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:15:18.806201 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 05:15:18.806354 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 05:15:18.809257 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 05:15:18.812280 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 05:15:18.812506 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:15:18.829466 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 05:15:18.830353 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 05:15:18.830535 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:15:18.832437 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 05:15:18.832586 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:15:18.838477 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 05:15:18.839276 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 05:15:18.847544 ignition[1403]: INFO : Ignition 2.21.0 Jul 15 05:15:18.847544 ignition[1403]: INFO : Stage: umount Jul 15 05:15:18.850233 ignition[1403]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:18.850233 ignition[1403]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:18.850233 ignition[1403]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:18.852224 ignition[1403]: INFO : PUT result: OK Jul 15 05:15:18.854582 ignition[1403]: INFO : umount: umount passed Jul 15 05:15:18.855445 ignition[1403]: INFO : Ignition finished successfully Jul 15 05:15:18.857555 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 05:15:18.857667 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 05:15:18.858259 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 05:15:18.858302 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 05:15:18.858697 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 05:15:18.858735 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 05:15:18.859323 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jul 15 05:15:18.859359 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 05:15:18.860077 systemd[1]: Stopped target network.target - Network. Jul 15 05:15:18.861111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 05:15:18.861295 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:15:18.862762 systemd[1]: Stopped target paths.target - Path Units. Jul 15 05:15:18.863268 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 05:15:18.868252 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:15:18.868699 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 05:15:18.869756 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 05:15:18.870437 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 05:15:18.870506 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:15:18.871047 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 05:15:18.871097 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:15:18.871663 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 05:15:18.871754 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 05:15:18.872306 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 05:15:18.872360 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 05:15:18.873037 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 05:15:18.873780 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 05:15:18.877546 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 05:15:18.878391 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 05:15:18.878518 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 05:15:18.882055 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 05:15:18.882450 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 05:15:18.882597 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 05:15:18.884772 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 05:15:18.885085 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 05:15:18.885378 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 05:15:18.887606 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 05:15:18.888031 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 05:15:18.888090 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:15:18.888695 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 05:15:18.888768 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 05:15:18.890482 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 05:15:18.892817 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 05:15:18.892897 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:15:18.895300 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 05:15:18.895367 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 15 05:15:18.895917 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 05:15:18.895976 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 05:15:18.897373 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 05:15:18.897442 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:15:18.898052 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:15:18.901866 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 05:15:18.901972 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:15:18.908792 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 05:15:18.909119 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:15:18.910689 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 05:15:18.910774 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 05:15:18.911537 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 05:15:18.911585 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:15:18.913376 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 05:15:18.913447 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:15:18.915289 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 05:15:18.915358 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 05:15:18.916892 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 05:15:18.916964 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:15:18.919260 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 05:15:18.919854 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 05:15:18.919924 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:15:18.920683 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 05:15:18.920748 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:15:18.921538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:15:18.921596 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:18.927623 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 05:15:18.927714 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 05:15:18.927772 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:15:18.928285 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 05:15:18.930290 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 05:15:18.937401 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 05:15:18.937550 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 05:15:18.939050 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 05:15:18.940712 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jul 15 05:15:18.963816 systemd[1]: Switching root. Jul 15 05:15:19.006273 systemd-journald[207]: Journal stopped Jul 15 05:15:21.040553 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Jul 15 05:15:21.040642 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 05:15:21.040666 kernel: SELinux: policy capability open_perms=1 Jul 15 05:15:21.040690 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 05:15:21.040719 kernel: SELinux: policy capability always_check_network=0 Jul 15 05:15:21.040744 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 05:15:21.040764 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 05:15:21.040784 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 05:15:21.040805 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 05:15:21.040825 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 05:15:21.040845 kernel: audit: type=1403 audit(1752556519.519:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 05:15:21.040867 systemd[1]: Successfully loaded SELinux policy in 90.273ms. Jul 15 05:15:21.040899 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.256ms. Jul 15 05:15:21.040922 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:15:21.040945 systemd[1]: Detected virtualization amazon. Jul 15 05:15:21.040966 systemd[1]: Detected architecture x86-64. Jul 15 05:15:21.040987 systemd[1]: Detected first boot. Jul 15 05:15:21.041008 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:15:21.041028 zram_generator::config[1447]: No configuration found. Jul 15 05:15:21.041050 kernel: Guest personality initialized and is inactive Jul 15 05:15:21.041070 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 05:15:21.041093 kernel: Initialized host personality Jul 15 05:15:21.041113 kernel: NET: Registered PF_VSOCK protocol family Jul 15 05:15:21.041147 systemd[1]: Populated /etc with preset unit settings. Jul 15 05:15:21.046212 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 05:15:21.046275 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 05:15:21.046297 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 05:15:21.046320 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 05:15:21.046341 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 05:15:21.046369 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 05:15:21.046391 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 05:15:21.046412 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 05:15:21.046433 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 05:15:21.046454 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 05:15:21.046475 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 05:15:21.046502 systemd[1]: Created slice user.slice - User and Session Slice. 
Jul 15 05:15:21.046523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:15:21.046543 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:15:21.046567 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 05:15:21.046589 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 05:15:21.046610 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 05:15:21.046638 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:15:21.046660 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 05:15:21.046683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:15:21.046707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:15:21.046730 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 05:15:21.046757 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 05:15:21.046777 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 05:15:21.046799 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 05:15:21.046820 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:15:21.046841 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:15:21.046862 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:15:21.046885 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:15:21.046902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 05:15:21.046920 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 05:15:21.046943 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 05:15:21.046961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:15:21.046981 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:15:21.047000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:15:21.047018 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 05:15:21.047037 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 05:15:21.047057 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 05:15:21.047077 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 05:15:21.047096 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:21.047119 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 05:15:21.047136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 05:15:21.056313 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 05:15:21.056367 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 05:15:21.056393 systemd[1]: Reached target machines.target - Containers. 
Jul 15 05:15:21.056420 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 05:15:21.056447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:15:21.056473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:15:21.056505 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 05:15:21.056529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:15:21.056553 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:15:21.056577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:15:21.056602 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 05:15:21.056625 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:15:21.056651 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 05:15:21.056677 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 05:15:21.056705 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 05:15:21.056732 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 05:15:21.056758 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 05:15:21.056787 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:15:21.056810 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:15:21.056836 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:15:21.056860 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:15:21.056887 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 05:15:21.056913 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 05:15:21.056941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:15:21.056967 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 05:15:21.056993 systemd[1]: Stopped verity-setup.service. Jul 15 05:15:21.057018 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:21.057044 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 05:15:21.057069 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 05:15:21.057093 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 05:15:21.057115 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 05:15:21.057152 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 05:15:21.064756 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 05:15:21.064804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:15:21.064826 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 15 05:15:21.064843 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 05:15:21.064859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:15:21.064876 kernel: fuse: init (API version 7.41) Jul 15 05:15:21.064893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:15:21.064913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:15:21.064932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:15:21.064951 kernel: loop: module loaded Jul 15 05:15:21.064973 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 05:15:21.064992 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 05:15:21.065011 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:15:21.065030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:15:21.065048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:15:21.065069 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:15:21.065087 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 05:15:21.065105 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:15:21.066470 systemd-journald[1526]: Collecting audit messages is disabled. Jul 15 05:15:21.066552 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 05:15:21.066582 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 05:15:21.066608 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 05:15:21.066638 systemd-journald[1526]: Journal started Jul 15 05:15:21.066686 systemd-journald[1526]: Runtime Journal (/run/log/journal/ec2aad653e5dba4ae2b2d7a0e492c636) is 4.8M, max 38.4M, 33.6M free. Jul 15 05:15:21.071759 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:15:20.628311 systemd[1]: Queued start job for default target multi-user.target. Jul 15 05:15:20.641616 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 15 05:15:20.642138 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 05:15:21.082055 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 05:15:21.088202 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 05:15:21.096199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:15:21.104986 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 05:15:21.112778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:15:21.116928 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 05:15:21.120200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:15:21.127206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:15:21.140222 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 15 05:15:21.153599 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:15:21.158259 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 05:15:21.165707 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 05:15:21.168506 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 05:15:21.170452 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 05:15:21.196627 kernel: ACPI: bus type drm_connector registered Jul 15 05:15:21.198351 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 05:15:21.200865 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:15:21.201934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:15:21.211847 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:15:21.213982 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:15:21.225673 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 05:15:21.228321 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 05:15:21.232642 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 05:15:21.236438 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 05:15:21.253444 kernel: loop0: detected capacity change from 0 to 114000 Jul 15 05:15:21.256473 systemd-journald[1526]: Time spent on flushing to /var/log/journal/ec2aad653e5dba4ae2b2d7a0e492c636 is 41.819ms for 1023 entries. Jul 15 05:15:21.256473 systemd-journald[1526]: System Journal (/var/log/journal/ec2aad653e5dba4ae2b2d7a0e492c636) is 8M, max 195.6M, 187.6M free. Jul 15 05:15:21.307090 systemd-journald[1526]: Received client request to flush runtime journal. Jul 15 05:15:21.308042 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 05:15:21.309241 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 05:15:21.311572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:15:21.334289 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 05:15:21.344664 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Jul 15 05:15:21.345027 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Jul 15 05:15:21.349472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:15:21.368705 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 05:15:21.398218 kernel: loop1: detected capacity change from 0 to 146488 Jul 15 05:15:21.519369 kernel: loop2: detected capacity change from 0 to 224512 Jul 15 05:15:21.561330 kernel: loop3: detected capacity change from 0 to 72384 Jul 15 05:15:21.643771 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 05:15:21.675284 kernel: loop4: detected capacity change from 0 to 114000 Jul 15 05:15:21.692208 kernel: loop5: detected capacity change from 0 to 146488 Jul 15 05:15:21.741619 kernel: loop6: detected capacity change from 0 to 224512 Jul 15 05:15:21.781340 kernel: loop7: detected capacity change from 0 to 72384 Jul 15 05:15:21.797778 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. 
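The loop devices and the sd-merge message above correspond to system extension images being overlaid onto /usr (containerd-flatcar, docker-flatcar, kubernetes, oem-ami). As an illustration of where those come from, the sketch below lists *.raw images and extension directories in the usual sysext search paths; the path list is an assumption based on common defaults, and `systemd-sysext list` is the authoritative view on a running system.

    #!/usr/bin/env python3
    # Enumerate candidate system extensions the way systemd-sysext would
    # discover them before merging into /usr. Search paths are assumed defaults.
    import pathlib

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover() -> list[str]:
        found = []
        for base in map(pathlib.Path, SEARCH_PATHS):
            if not base.is_dir():
                continue
            for entry in sorted(base.iterdir()):
                if entry.suffix == ".raw" or entry.is_dir():
                    found.append(f"{entry.name} ({entry})")
        return found

    if __name__ == "__main__":
        exts = discover()
        print("\n".join(exts) if exts else "no extension images found")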
Jul 15 05:15:21.798632 (sd-merge)[1605]: Merged extensions into '/usr'. Jul 15 05:15:21.804027 systemd[1]: Reload requested from client PID 1562 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 05:15:21.804185 systemd[1]: Reloading... Jul 15 05:15:21.860415 zram_generator::config[1628]: No configuration found. Jul 15 05:15:22.003011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:15:22.098815 systemd[1]: Reloading finished in 294 ms. Jul 15 05:15:22.116713 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 05:15:22.117620 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 05:15:22.127363 systemd[1]: Starting ensure-sysext.service... Jul 15 05:15:22.131305 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:15:22.133198 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:15:22.161134 systemd[1]: Reload requested from client PID 1683 ('systemctl') (unit ensure-sysext.service)... Jul 15 05:15:22.161152 systemd[1]: Reloading... Jul 15 05:15:22.165546 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 05:15:22.165587 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 05:15:22.165867 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 05:15:22.166122 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 05:15:22.167451 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 05:15:22.167783 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Jul 15 05:15:22.167896 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Jul 15 05:15:22.179517 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Jul 15 05:15:22.182803 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:15:22.182814 systemd-tmpfiles[1684]: Skipping /boot Jul 15 05:15:22.191118 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:15:22.191280 systemd-tmpfiles[1684]: Skipping /boot Jul 15 05:15:22.256865 zram_generator::config[1713]: No configuration found. Jul 15 05:15:22.504141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:15:22.523502 (udev-worker)[1721]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 05:15:22.558197 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 15 05:15:22.564196 kernel: ACPI: button: Power Button [PWRF] Jul 15 05:15:22.569232 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 15 05:15:22.577201 kernel: ACPI: button: Sleep Button [SLPF] Jul 15 05:15:22.579274 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 05:15:22.589202 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 15 05:15:22.777472 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 05:15:22.779499 systemd[1]: Reloading finished in 617 ms. Jul 15 05:15:22.795227 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:15:22.799080 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:15:22.825373 ldconfig[1558]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 05:15:22.840510 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 05:15:22.918942 systemd[1]: Finished ensure-sysext.service. Jul 15 05:15:22.940588 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:22.943568 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:15:22.946729 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 05:15:22.947822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:15:22.952440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:15:22.955475 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:15:22.967447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:15:22.972093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:15:22.972916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:15:22.972981 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:15:22.976916 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 05:15:22.985543 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:15:22.996469 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:15:22.997080 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 05:15:23.002465 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 05:15:23.006926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:23.007538 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:23.008712 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:15:23.009521 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jul 15 05:15:23.045489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:15:23.048209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:15:23.069462 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 05:15:23.099270 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 05:15:23.108711 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:15:23.109736 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:15:23.128006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:15:23.129564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:15:23.133497 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:15:23.133617 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:15:23.149413 augenrules[1914]: No rules Jul 15 05:15:23.152086 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:15:23.154315 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:15:23.172465 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 05:15:23.178713 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 05:15:23.205464 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 05:15:23.206359 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 05:15:23.228088 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 05:15:23.244763 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:23.269075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 15 05:15:23.271322 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 05:15:23.300539 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 05:15:23.324734 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 05:15:23.394355 systemd-networkd[1854]: lo: Link UP Jul 15 05:15:23.394373 systemd-networkd[1854]: lo: Gained carrier Jul 15 05:15:23.396313 systemd-networkd[1854]: Enumeration completed Jul 15 05:15:23.396801 systemd-networkd[1854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:23.396816 systemd-networkd[1854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:15:23.397674 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:15:23.400927 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 05:15:23.403246 systemd-resolved[1860]: Positive Trust Anchors: Jul 15 05:15:23.403620 systemd-resolved[1860]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:15:23.403748 systemd-resolved[1860]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:15:23.404415 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 05:15:23.405324 systemd-networkd[1854]: eth0: Link UP Jul 15 05:15:23.405507 systemd-networkd[1854]: eth0: Gained carrier Jul 15 05:15:23.405541 systemd-networkd[1854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:23.415658 systemd-resolved[1860]: Defaulting to hostname 'linux'. Jul 15 05:15:23.417986 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:15:23.418579 systemd[1]: Reached target network.target - Network. Jul 15 05:15:23.419053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:15:23.419262 systemd-networkd[1854]: eth0: DHCPv4 address 172.31.18.224/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 15 05:15:23.420273 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:15:23.421005 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 05:15:23.422084 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 05:15:23.423249 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 05:15:23.424243 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 05:15:23.424956 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 05:15:23.425747 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 05:15:23.427027 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 05:15:23.427071 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:15:23.427650 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:15:23.432386 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 05:15:23.434925 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 05:15:23.437931 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 05:15:23.438924 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 05:15:23.439416 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 05:15:23.442165 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 05:15:23.442984 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 05:15:23.444455 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jul 15 05:15:23.445046 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 05:15:23.446811 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:15:23.447377 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:15:23.447863 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:15:23.447908 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:15:23.449265 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 05:15:23.452336 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 05:15:23.455323 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 05:15:23.459997 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 05:15:23.465081 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 05:15:23.469721 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 05:15:23.470414 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 05:15:23.474436 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 05:15:23.477609 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 05:15:23.483437 systemd[1]: Started ntpd.service - Network Time Service. Jul 15 05:15:23.489274 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 05:15:23.497214 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 15 05:15:23.505457 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 05:15:23.511907 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 05:15:23.545712 jq[1969]: false Jul 15 05:15:23.548046 oslogin_cache_refresh[1971]: Refreshing passwd entry cache Jul 15 05:15:23.550410 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Refreshing passwd entry cache Jul 15 05:15:23.555421 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 05:15:23.558531 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 05:15:23.559826 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 05:15:23.563900 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 05:15:23.569353 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Failure getting users, quitting Jul 15 05:15:23.569353 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:15:23.569353 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Refreshing group entry cache Jul 15 05:15:23.567390 oslogin_cache_refresh[1971]: Failure getting users, quitting Jul 15 05:15:23.567414 oslogin_cache_refresh[1971]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:15:23.567473 oslogin_cache_refresh[1971]: Refreshing group entry cache Jul 15 05:15:23.571324 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 15 05:15:23.577203 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Failure getting groups, quitting Jul 15 05:15:23.577203 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:15:23.574620 oslogin_cache_refresh[1971]: Failure getting groups, quitting Jul 15 05:15:23.574637 oslogin_cache_refresh[1971]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:15:23.585220 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 05:15:23.586363 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 05:15:23.587448 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 05:15:23.587876 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 05:15:23.604329 extend-filesystems[1970]: Found /dev/nvme0n1p6 Jul 15 05:15:23.594913 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 05:15:23.606939 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 05:15:23.609076 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 05:15:23.636852 jq[1986]: true Jul 15 05:15:23.645852 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 05:15:23.646150 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 05:15:23.662415 extend-filesystems[1970]: Found /dev/nvme0n1p9 Jul 15 05:15:23.675203 extend-filesystems[1970]: Checking size of /dev/nvme0n1p9 Jul 15 05:15:23.674081 (ntainerd)[1996]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 05:15:23.686455 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:16 UTC 2025 (1): Starting Jul 15 05:15:23.688434 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:16 UTC 2025 (1): Starting Jul 15 05:15:23.688434 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 05:15:23.688434 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: ---------------------------------------------------- Jul 15 05:15:23.688434 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jul 15 05:15:23.688434 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 05:15:23.688434 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: corporation. Support and training for ntp-4 are Jul 15 05:15:23.688434 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: available at https://www.nwtime.org/support Jul 15 05:15:23.688434 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: ---------------------------------------------------- Jul 15 05:15:23.686486 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 05:15:23.686496 ntpd[1973]: ---------------------------------------------------- Jul 15 05:15:23.686508 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jul 15 05:15:23.686518 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 05:15:23.686527 ntpd[1973]: corporation. 
Support and training for ntp-4 are Jul 15 05:15:23.686537 ntpd[1973]: available at https://www.nwtime.org/support Jul 15 05:15:23.686546 ntpd[1973]: ---------------------------------------------------- Jul 15 05:15:23.697218 tar[1993]: linux-amd64/LICENSE Jul 15 05:15:23.698971 ntpd[1973]: proto: precision = 0.069 usec (-24) Jul 15 05:15:23.702439 tar[1993]: linux-amd64/helm Jul 15 05:15:23.702489 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: proto: precision = 0.069 usec (-24) Jul 15 05:15:23.702489 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: basedate set to 2025-07-03 Jul 15 05:15:23.702489 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: gps base set to 2025-07-06 (week 2374) Jul 15 05:15:23.700984 ntpd[1973]: basedate set to 2025-07-03 Jul 15 05:15:23.701007 ntpd[1973]: gps base set to 2025-07-06 (week 2374) Jul 15 05:15:23.706219 update_engine[1985]: I20250715 05:15:23.703768 1985 main.cc:92] Flatcar Update Engine starting Jul 15 05:15:23.706573 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 05:15:23.706573 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 05:15:23.706573 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 05:15:23.706573 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: Listen normally on 3 eth0 172.31.18.224:123 Jul 15 05:15:23.706295 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 05:15:23.711387 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jul 15 05:15:23.711387 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: bind(21) AF_INET6 fe80::444:bfff:fe58:691b%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:23.711387 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: unable to create socket on eth0 (5) for fe80::444:bfff:fe58:691b%2#123 Jul 15 05:15:23.711387 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: failed to init interface for address fe80::444:bfff:fe58:691b%2 Jul 15 05:15:23.711387 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: Listening on routing socket on fd #21 for interface updates Jul 15 05:15:23.706348 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 05:15:23.706534 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 05:15:23.706572 ntpd[1973]: Listen normally on 3 eth0 172.31.18.224:123 Jul 15 05:15:23.706614 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jul 15 05:15:23.706661 ntpd[1973]: bind(21) AF_INET6 fe80::444:bfff:fe58:691b%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:23.706682 ntpd[1973]: unable to create socket on eth0 (5) for fe80::444:bfff:fe58:691b%2#123 Jul 15 05:15:23.706696 ntpd[1973]: failed to init interface for address fe80::444:bfff:fe58:691b%2 Jul 15 05:15:23.706731 ntpd[1973]: Listening on routing socket on fd #21 for interface updates Jul 15 05:15:23.720726 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:23.720726 ntpd[1973]: 15 Jul 05:15:23 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:23.712163 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:23.712218 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:23.725012 dbus-daemon[1967]: [system] SELinux support is enabled Jul 15 05:15:23.732940 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
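ntpd cannot yet bind the IPv6 link-local address fe80::444:bfff:fe58:691b on eth0 this early in boot; it retries and succeeds later in this log once the interface gains IPv6. A quick, purely illustrative way to check which NTP sockets are actually held:

    # illustrative: show UDP/123 listeners and the owning process
    ss -ulpn 'sport = :123'
    # and follow ntpd's own view of its interface bindings
    journalctl -u ntpd.service -f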
Jul 15 05:15:23.739816 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 05:15:23.739867 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 05:15:23.740570 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 05:15:23.740601 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 05:15:23.756436 jq[2009]: true Jul 15 05:15:23.770255 extend-filesystems[1970]: Resized partition /dev/nvme0n1p9 Jul 15 05:15:23.784049 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 15 05:15:23.766658 systemd-logind[1980]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 05:15:23.776760 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1854 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 15 05:15:23.784462 extend-filesystems[2026]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 05:15:23.788900 update_engine[1985]: I20250715 05:15:23.773324 1985 update_check_scheduler.cc:74] Next update check in 11m0s Jul 15 05:15:23.766684 systemd-logind[1980]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 15 05:15:23.766709 systemd-logind[1980]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 05:15:23.768381 systemd-logind[1980]: New seat seat0. Jul 15 05:15:23.773066 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 05:15:23.777653 systemd[1]: Started update-engine.service - Update Engine. Jul 15 05:15:23.798479 coreos-metadata[1966]: Jul 15 05:15:23.795 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 05:15:23.789927 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
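update-engine starts here and schedules its first update check in 11 minutes. On a Flatcar node the engine can be queried interactively; a hedged sketch, assuming the stock client is installed:

    # illustrative: ask the update engine for its current state
    update_engine_client -status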
Jul 15 05:15:23.816530 coreos-metadata[1966]: Jul 15 05:15:23.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 15 05:15:23.816530 coreos-metadata[1966]: Jul 15 05:15:23.815 INFO Fetch successful Jul 15 05:15:23.816530 coreos-metadata[1966]: Jul 15 05:15:23.815 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 15 05:15:23.816530 coreos-metadata[1966]: Jul 15 05:15:23.816 INFO Fetch successful Jul 15 05:15:23.816530 coreos-metadata[1966]: Jul 15 05:15:23.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 15 05:15:23.820675 coreos-metadata[1966]: Jul 15 05:15:23.816 INFO Fetch successful Jul 15 05:15:23.820675 coreos-metadata[1966]: Jul 15 05:15:23.818 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 15 05:15:23.824245 coreos-metadata[1966]: Jul 15 05:15:23.822 INFO Fetch successful Jul 15 05:15:23.824245 coreos-metadata[1966]: Jul 15 05:15:23.822 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 15 05:15:23.824706 coreos-metadata[1966]: Jul 15 05:15:23.824 INFO Fetch failed with 404: resource not found Jul 15 05:15:23.824706 coreos-metadata[1966]: Jul 15 05:15:23.824 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 15 05:15:23.825757 coreos-metadata[1966]: Jul 15 05:15:23.825 INFO Fetch successful Jul 15 05:15:23.825757 coreos-metadata[1966]: Jul 15 05:15:23.825 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 15 05:15:23.826244 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 05:15:23.834342 coreos-metadata[1966]: Jul 15 05:15:23.830 INFO Fetch successful Jul 15 05:15:23.834342 coreos-metadata[1966]: Jul 15 05:15:23.830 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 15 05:15:23.838223 coreos-metadata[1966]: Jul 15 05:15:23.834 INFO Fetch successful Jul 15 05:15:23.838223 coreos-metadata[1966]: Jul 15 05:15:23.834 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 15 05:15:23.839016 coreos-metadata[1966]: Jul 15 05:15:23.838 INFO Fetch successful Jul 15 05:15:23.839016 coreos-metadata[1966]: Jul 15 05:15:23.838 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 15 05:15:23.841191 coreos-metadata[1966]: Jul 15 05:15:23.839 INFO Fetch successful Jul 15 05:15:23.860816 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 15 05:15:23.961523 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 15 05:15:23.968819 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 05:15:23.972479 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 05:15:23.981370 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 15 05:15:23.981370 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 05:15:23.981370 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 15 05:15:23.988596 extend-filesystems[1970]: Resized filesystem in /dev/nvme0n1p9 Jul 15 05:15:23.983580 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 05:15:23.983877 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
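extend-filesystems grew the root ext4 filesystem on /dev/nvme0n1p9 online, from 553472 to 1489915 4k blocks (roughly 2.1 GiB to 5.7 GiB), while it was mounted on /. The manual equivalent, as an illustrative sketch of what the unit effectively does on first boot:

    # illustrative: grow the root filesystem into its (already enlarged) partition
    lsblk -o NAME,SIZE,MOUNTPOINT /dev/nvme0n1   # confirm p9 backs /
    resize2fs /dev/nvme0n1p9                     # ext4 resizes online while mounted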
Jul 15 05:15:24.032902 bash[2055]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:15:24.037041 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 05:15:24.050802 systemd[1]: Starting sshkeys.service... Jul 15 05:15:24.182617 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 15 05:15:24.187406 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 15 05:15:24.324117 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 05:15:24.383626 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 15 05:15:24.391549 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 15 05:15:24.396969 locksmithd[2028]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:15:24.403008 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2027 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 15 05:15:24.421159 systemd[1]: Starting polkit.service - Authorization Manager... Jul 15 05:15:24.456610 coreos-metadata[2120]: Jul 15 05:15:24.456 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 05:15:24.457796 coreos-metadata[2120]: Jul 15 05:15:24.457 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 15 05:15:24.458646 coreos-metadata[2120]: Jul 15 05:15:24.458 INFO Fetch successful Jul 15 05:15:24.458848 coreos-metadata[2120]: Jul 15 05:15:24.458 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 15 05:15:24.460511 coreos-metadata[2120]: Jul 15 05:15:24.460 INFO Fetch successful Jul 15 05:15:24.462556 unknown[2120]: wrote ssh authorized keys file for user: core Jul 15 05:15:24.522711 update-ssh-keys[2165]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:15:24.526265 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 15 05:15:24.534258 systemd[1]: Finished sshkeys.service. 
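Both coreos-metadata runs above use the IMDSv2 handshake: a PUT to /latest/api/token first, then the individual meta-data fetches carrying the returned token. The same exchange by hand, for illustration:

    # illustrative: the IMDSv2 token handshake coreos-metadata performs
    TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" \
          -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
          "http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key"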
Jul 15 05:15:24.623340 containerd[1996]: time="2025-07-15T05:15:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 05:15:24.627891 containerd[1996]: time="2025-07-15T05:15:24.627841004Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 05:15:24.665932 containerd[1996]: time="2025-07-15T05:15:24.665879216Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.161µs" Jul 15 05:15:24.665932 containerd[1996]: time="2025-07-15T05:15:24.665926156Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 05:15:24.666085 containerd[1996]: time="2025-07-15T05:15:24.665950771Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 05:15:24.666160 containerd[1996]: time="2025-07-15T05:15:24.666138505Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 05:15:24.668312 polkitd[2164]: Started polkitd version 126 Jul 15 05:15:24.669500 containerd[1996]: time="2025-07-15T05:15:24.666167720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 05:15:24.669500 containerd[1996]: time="2025-07-15T05:15:24.668811899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:15:24.669500 containerd[1996]: time="2025-07-15T05:15:24.668921818Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:15:24.669500 containerd[1996]: time="2025-07-15T05:15:24.668938571Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:15:24.669994 containerd[1996]: time="2025-07-15T05:15:24.669959442Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:15:24.670074 containerd[1996]: time="2025-07-15T05:15:24.670060085Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:15:24.670144 containerd[1996]: time="2025-07-15T05:15:24.670129608Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:15:24.670595 containerd[1996]: time="2025-07-15T05:15:24.670567179Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 05:15:24.670808 containerd[1996]: time="2025-07-15T05:15:24.670787589Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 05:15:24.671286 containerd[1996]: time="2025-07-15T05:15:24.671254520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:15:24.671419 containerd[1996]: time="2025-07-15T05:15:24.671397744Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such 
file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:15:24.671503 containerd[1996]: time="2025-07-15T05:15:24.671488686Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 05:15:24.671602 containerd[1996]: time="2025-07-15T05:15:24.671586779Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 05:15:24.672508 containerd[1996]: time="2025-07-15T05:15:24.672484310Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 05:15:24.673349 containerd[1996]: time="2025-07-15T05:15:24.673162246Z" level=info msg="metadata content store policy set" policy=shared Jul 15 05:15:24.677710 polkitd[2164]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 05:15:24.678666 containerd[1996]: time="2025-07-15T05:15:24.678234431Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678782762Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678810288Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678871441Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678890795Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678907079Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678927251Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678945361Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678961472Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678977093Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.678991614Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.679009497Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 05:15:24.679187 containerd[1996]: time="2025-07-15T05:15:24.679154005Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679736043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679769719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content 
type=io.containerd.grpc.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679791915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679817860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679834636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679852279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679867553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679885981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679902398Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.679917997Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.680003947Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 05:15:24.680193 containerd[1996]: time="2025-07-15T05:15:24.680021612Z" level=info msg="Start snapshots syncer" Jul 15 05:15:24.681357 containerd[1996]: time="2025-07-15T05:15:24.680990605Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 05:15:24.681841 containerd[1996]: time="2025-07-15T05:15:24.681630836Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 05:15:24.681841 containerd[1996]: time="2025-07-15T05:15:24.681706983Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 05:15:24.682148 polkitd[2164]: Loading rules from directory /run/polkit-1/rules.d Jul 15 05:15:24.682517 polkitd[2164]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 05:15:24.682651 containerd[1996]: time="2025-07-15T05:15:24.682537045Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 05:15:24.683061 containerd[1996]: time="2025-07-15T05:15:24.683037213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683219852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683247375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683267265Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683291107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683307127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683323986Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683368279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683388298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683404551Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683447338Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683469993Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683484480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683499514Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:15:24.684958 containerd[1996]: time="2025-07-15T05:15:24.683511574Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 05:15:24.684238 polkitd[2164]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 05:15:24.685562 containerd[1996]: time="2025-07-15T05:15:24.683525853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 05:15:24.685562 containerd[1996]: time="2025-07-15T05:15:24.683542158Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 05:15:24.685562 containerd[1996]: time="2025-07-15T05:15:24.683566728Z" level=info msg="runtime interface created" Jul 15 05:15:24.685562 containerd[1996]: time="2025-07-15T05:15:24.683574753Z" level=info msg="created NRI interface" Jul 15 05:15:24.685562 containerd[1996]: time="2025-07-15T05:15:24.683588133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 05:15:24.685562 containerd[1996]: time="2025-07-15T05:15:24.683607703Z" level=info msg="Connect containerd service" Jul 15 05:15:24.685562 containerd[1996]: time="2025-07-15T05:15:24.683646720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:15:24.684473 polkitd[2164]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 05:15:24.684628 polkitd[2164]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 05:15:24.686921 ntpd[1973]: bind(24) AF_INET6 fe80::444:bfff:fe58:691b%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:24.686962 ntpd[1973]: unable to create socket on eth0 (6) for fe80::444:bfff:fe58:691b%2#123 Jul 15 05:15:24.687291 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: bind(24) AF_INET6 fe80::444:bfff:fe58:691b%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:24.687291 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: unable to create socket on eth0 (6) for 
fe80::444:bfff:fe58:691b%2#123 Jul 15 05:15:24.687291 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: failed to init interface for address fe80::444:bfff:fe58:691b%2 Jul 15 05:15:24.686976 ntpd[1973]: failed to init interface for address fe80::444:bfff:fe58:691b%2 Jul 15 05:15:24.687285 polkitd[2164]: Finished loading, compiling and executing 2 rules Jul 15 05:15:24.688315 containerd[1996]: time="2025-07-15T05:15:24.687887009Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:15:24.688009 systemd[1]: Started polkit.service - Authorization Manager. Jul 15 05:15:24.692116 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 05:15:24.693240 polkitd[2164]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 05:15:24.720449 systemd-hostnamed[2027]: Hostname set to (transient) Jul 15 05:15:24.720584 systemd-resolved[1860]: System hostname changed to 'ip-172-31-18-224'. Jul 15 05:15:24.802960 sshd_keygen[2017]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 05:15:24.845459 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 05:15:24.851560 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 05:15:24.854327 systemd[1]: Started sshd@0-172.31.18.224:22-139.178.89.65:49194.service - OpenSSH per-connection server daemon (139.178.89.65:49194). Jul 15 05:15:24.880884 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 05:15:24.881230 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 05:15:24.887113 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 05:15:24.924166 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 05:15:24.929937 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 05:15:24.933439 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 05:15:24.935530 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 05:15:24.978939 tar[1993]: linux-amd64/README.md Jul 15 05:15:25.003773 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 05:15:25.083872 containerd[1996]: time="2025-07-15T05:15:25.083826912Z" level=info msg="Start subscribing containerd event" Jul 15 05:15:25.083984 containerd[1996]: time="2025-07-15T05:15:25.083896228Z" level=info msg="Start recovering state" Jul 15 05:15:25.084059 containerd[1996]: time="2025-07-15T05:15:25.084039217Z" level=info msg="Start event monitor" Jul 15 05:15:25.084084 containerd[1996]: time="2025-07-15T05:15:25.084058079Z" level=info msg="Start cni network conf syncer for default" Jul 15 05:15:25.084084 containerd[1996]: time="2025-07-15T05:15:25.084071675Z" level=info msg="Start streaming server" Jul 15 05:15:25.084150 containerd[1996]: time="2025-07-15T05:15:25.084082854Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 05:15:25.084150 containerd[1996]: time="2025-07-15T05:15:25.084091050Z" level=info msg="runtime interface starting up..." Jul 15 05:15:25.084150 containerd[1996]: time="2025-07-15T05:15:25.084108487Z" level=info msg="starting plugins..." 
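containerd's CRI plugin starts without a pod network because /etc/cni/net.d is empty; on a Kubernetes node that directory is normally populated later by whatever CNI add-on gets deployed. Purely as an illustration of the shape of file the plugin is looking for (file name and subnet are placeholders, not this cluster's configuration):

    # illustrative only: a minimal bridge/host-local conflist for /etc/cni/net.d
    mkdir -p /etc/cni/net.d
    cat >/etc/cni/net.d/10-containerd-net.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF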
Jul 15 05:15:25.084150 containerd[1996]: time="2025-07-15T05:15:25.084122239Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 05:15:25.084420 containerd[1996]: time="2025-07-15T05:15:25.084393871Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 05:15:25.084554 containerd[1996]: time="2025-07-15T05:15:25.084519477Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 05:15:25.084690 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 05:15:25.084800 containerd[1996]: time="2025-07-15T05:15:25.084688829Z" level=info msg="containerd successfully booted in 0.461942s" Jul 15 05:15:25.140034 sshd[2194]: Accepted publickey for core from 139.178.89.65 port 49194 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:25.142789 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:25.150748 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 05:15:25.152282 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 05:15:25.160852 systemd-logind[1980]: New session 1 of user core. Jul 15 05:15:25.175749 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 05:15:25.179850 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 05:15:25.194092 (systemd)[2215]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 05:15:25.197020 systemd-logind[1980]: New session c1 of user core. Jul 15 05:15:25.209365 systemd-networkd[1854]: eth0: Gained IPv6LL Jul 15 05:15:25.212238 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 05:15:25.214517 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 05:15:25.217386 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 15 05:15:25.222890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:25.228569 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 05:15:25.309272 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 05:15:25.360692 amazon-ssm-agent[2220]: Initializing new seelog logger Jul 15 05:15:25.361051 amazon-ssm-agent[2220]: New Seelog Logger Creation Complete Jul 15 05:15:25.361051 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.361051 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.363199 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 processing appconfig overrides Jul 15 05:15:25.363199 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.363199 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.363409 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 processing appconfig overrides Jul 15 05:15:25.363718 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.363718 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 15 05:15:25.363813 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 processing appconfig overrides Jul 15 05:15:25.364907 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.3618 INFO Proxy environment variables: Jul 15 05:15:25.367047 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.367047 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.367163 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 processing appconfig overrides Jul 15 05:15:25.468195 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.3622 INFO https_proxy: Jul 15 05:15:25.467948 systemd[2215]: Queued start job for default target default.target. Jul 15 05:15:25.474051 systemd[2215]: Created slice app.slice - User Application Slice. Jul 15 05:15:25.474100 systemd[2215]: Reached target paths.target - Paths. Jul 15 05:15:25.474601 systemd[2215]: Reached target timers.target - Timers. Jul 15 05:15:25.477297 systemd[2215]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 05:15:25.502993 systemd[2215]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 05:15:25.504155 systemd[2215]: Reached target sockets.target - Sockets. Jul 15 05:15:25.504252 systemd[2215]: Reached target basic.target - Basic System. Jul 15 05:15:25.504304 systemd[2215]: Reached target default.target - Main User Target. Jul 15 05:15:25.504350 systemd[2215]: Startup finished in 299ms. Jul 15 05:15:25.504513 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 05:15:25.513490 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 05:15:25.564728 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.3622 INFO http_proxy: Jul 15 05:15:25.665466 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.3622 INFO no_proxy: Jul 15 05:15:25.675535 systemd[1]: Started sshd@1-172.31.18.224:22-139.178.89.65:49202.service - OpenSSH per-connection server daemon (139.178.89.65:49202). Jul 15 05:15:25.767425 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.767425 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.767425 amazon-ssm-agent[2220]: 2025/07/15 05:15:25 processing appconfig overrides Jul 15 05:15:25.779682 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.3634 INFO Checking if agent identity type OnPrem can be assumed Jul 15 05:15:25.807845 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.3636 INFO Checking if agent identity type EC2 can be assumed Jul 15 05:15:25.807845 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.4360 INFO Agent will take identity from EC2 Jul 15 05:15:25.807845 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.4391 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 15 05:15:25.807845 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.4407 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 15 05:15:25.807845 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.4408 INFO [amazon-ssm-agent] Starting Core Agent Jul 15 05:15:25.807845 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.4408 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.4408 INFO [Registrar] Starting registrar module Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.4443 INFO [EC2Identity] Checking disk for registration info Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.4444 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.4444 INFO [EC2Identity] Generating registration keypair Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.7175 INFO [EC2Identity] Checking write access before registering Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.7180 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.7670 INFO [EC2Identity] EC2 registration was successful. Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.7670 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.7671 INFO [CredentialRefresher] credentialRefresher has started Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.7671 INFO [CredentialRefresher] Starting credentials refresher loop Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.8076 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 15 05:15:25.808045 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.8077 INFO [CredentialRefresher] Credentials ready Jul 15 05:15:25.877573 sshd[2246]: Accepted publickey for core from 139.178.89.65 port 49202 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:25.878229 amazon-ssm-agent[2220]: 2025-07-15 05:15:25.8079 INFO [CredentialRefresher] Next credential rotation will be in 29.999994343166666 minutes Jul 15 05:15:25.879463 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:25.887249 systemd-logind[1980]: New session 2 of user core. Jul 15 05:15:25.892469 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 05:15:26.011728 sshd[2249]: Connection closed by 139.178.89.65 port 49202 Jul 15 05:15:26.012441 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:26.018269 systemd[1]: sshd@1-172.31.18.224:22-139.178.89.65:49202.service: Deactivated successfully. Jul 15 05:15:26.019342 systemd-logind[1980]: Session 2 logged out. Waiting for processes to exit. Jul 15 05:15:26.022020 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 05:15:26.024798 systemd-logind[1980]: Removed session 2. Jul 15 05:15:26.047933 systemd[1]: Started sshd@2-172.31.18.224:22-139.178.89.65:49212.service - OpenSSH per-connection server daemon (139.178.89.65:49212). Jul 15 05:15:26.234421 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 49212 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:26.235825 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:26.241079 systemd-logind[1980]: New session 3 of user core. Jul 15 05:15:26.246408 systemd[1]: Started session-3.scope - Session 3 of User core. 
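The SSM agent registers the instance with Systems Manager using the instance profile role and then launches its worker process (seen a little further down). An illustrative health check from the node itself:

    # illustrative: confirm the agent is healthy and inspect its recent output
    systemctl status amazon-ssm-agent --no-pager
    journalctl -u amazon-ssm-agent -n 50 --no-pager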
Jul 15 05:15:26.373992 sshd[2258]: Connection closed by 139.178.89.65 port 49212 Jul 15 05:15:26.374095 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:26.378208 systemd[1]: sshd@2-172.31.18.224:22-139.178.89.65:49212.service: Deactivated successfully. Jul 15 05:15:26.379815 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 05:15:26.380925 systemd-logind[1980]: Session 3 logged out. Waiting for processes to exit. Jul 15 05:15:26.382798 systemd-logind[1980]: Removed session 3. Jul 15 05:15:26.734453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:26.737045 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 05:15:26.740271 systemd[1]: Startup finished in 2.910s (kernel) + 9.805s (initrd) + 7.308s (userspace) = 20.025s. Jul 15 05:15:26.748965 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:15:26.821265 amazon-ssm-agent[2220]: 2025-07-15 05:15:26.8207 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 15 05:15:26.922012 amazon-ssm-agent[2220]: 2025-07-15 05:15:26.8225 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2275) started Jul 15 05:15:27.023717 amazon-ssm-agent[2220]: 2025-07-15 05:15:26.8226 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 15 05:15:27.503104 kubelet[2268]: E0715 05:15:27.503022 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:15:27.505430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:15:27.505580 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:15:27.506069 systemd[1]: kubelet.service: Consumed 1.036s CPU time, 263.8M memory peak. Jul 15 05:15:27.686939 ntpd[1973]: Listen normally on 7 eth0 [fe80::444:bfff:fe58:691b%2]:123 Jul 15 05:15:27.687326 ntpd[1973]: 15 Jul 05:15:27 ntpd[1973]: Listen normally on 7 eth0 [fe80::444:bfff:fe58:691b%2]:123 Jul 15 05:15:32.633116 systemd-resolved[1860]: Clock change detected. Flushing caches. Jul 15 05:15:38.361512 systemd[1]: Started sshd@3-172.31.18.224:22-139.178.89.65:49488.service - OpenSSH per-connection server daemon (139.178.89.65:49488). Jul 15 05:15:38.528134 sshd[2294]: Accepted publickey for core from 139.178.89.65 port 49488 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:38.529495 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:38.534940 systemd-logind[1980]: New session 4 of user core. Jul 15 05:15:38.541106 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 05:15:38.663157 sshd[2297]: Connection closed by 139.178.89.65 port 49488 Jul 15 05:15:38.663705 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:38.668100 systemd[1]: sshd@3-172.31.18.224:22-139.178.89.65:49488.service: Deactivated successfully. Jul 15 05:15:38.669886 systemd[1]: session-4.scope: Deactivated successfully. 
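The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet, and the scheduled restarts below fail the same way; on a kubeadm-provisioned node that file is written during kubeadm init/join. A minimal hand-written stand-in, shown only to illustrate what the kubelet expects (the values are assumptions, not what kubeadm would generate for this cluster):

    # illustrative only: the kind of file the kubelet is failing to find
    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF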
Jul 15 05:15:38.671171 systemd-logind[1980]: Session 4 logged out. Waiting for processes to exit. Jul 15 05:15:38.672712 systemd-logind[1980]: Removed session 4. Jul 15 05:15:38.704797 systemd[1]: Started sshd@4-172.31.18.224:22-139.178.89.65:49494.service - OpenSSH per-connection server daemon (139.178.89.65:49494). Jul 15 05:15:38.879379 sshd[2303]: Accepted publickey for core from 139.178.89.65 port 49494 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:38.880717 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:38.886041 systemd-logind[1980]: New session 5 of user core. Jul 15 05:15:38.896165 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 05:15:39.011634 sshd[2306]: Connection closed by 139.178.89.65 port 49494 Jul 15 05:15:39.012553 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:39.016816 systemd[1]: sshd@4-172.31.18.224:22-139.178.89.65:49494.service: Deactivated successfully. Jul 15 05:15:39.018429 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 05:15:39.019249 systemd-logind[1980]: Session 5 logged out. Waiting for processes to exit. Jul 15 05:15:39.020371 systemd-logind[1980]: Removed session 5. Jul 15 05:15:39.049625 systemd[1]: Started sshd@5-172.31.18.224:22-139.178.89.65:41174.service - OpenSSH per-connection server daemon (139.178.89.65:41174). Jul 15 05:15:39.212453 sshd[2312]: Accepted publickey for core from 139.178.89.65 port 41174 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:39.213992 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:39.220252 systemd-logind[1980]: New session 6 of user core. Jul 15 05:15:39.226157 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 05:15:39.345157 sshd[2315]: Connection closed by 139.178.89.65 port 41174 Jul 15 05:15:39.346421 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:39.350242 systemd[1]: sshd@5-172.31.18.224:22-139.178.89.65:41174.service: Deactivated successfully. Jul 15 05:15:39.352418 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 05:15:39.354806 systemd-logind[1980]: Session 6 logged out. Waiting for processes to exit. Jul 15 05:15:39.356150 systemd-logind[1980]: Removed session 6. Jul 15 05:15:39.375845 systemd[1]: Started sshd@6-172.31.18.224:22-139.178.89.65:41182.service - OpenSSH per-connection server daemon (139.178.89.65:41182). Jul 15 05:15:39.516568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 05:15:39.518631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:39.547683 sshd[2321]: Accepted publickey for core from 139.178.89.65 port 41182 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:39.549373 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:39.557474 systemd-logind[1980]: New session 7 of user core. Jul 15 05:15:39.570159 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 15 05:15:39.713792 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 05:15:39.714117 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:39.726286 sudo[2328]: pam_unix(sudo:session): session closed for user root Jul 15 05:15:39.749555 sshd[2327]: Connection closed by 139.178.89.65 port 41182 Jul 15 05:15:39.751135 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:39.755325 systemd[1]: sshd@6-172.31.18.224:22-139.178.89.65:41182.service: Deactivated successfully. Jul 15 05:15:39.764266 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 05:15:39.766196 systemd-logind[1980]: Session 7 logged out. Waiting for processes to exit. Jul 15 05:15:39.769315 systemd-logind[1980]: Removed session 7. Jul 15 05:15:39.788130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:39.801344 systemd[1]: Started sshd@7-172.31.18.224:22-139.178.89.65:41188.service - OpenSSH per-connection server daemon (139.178.89.65:41188). Jul 15 05:15:39.813309 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:15:39.893436 kubelet[2338]: E0715 05:15:39.893373 2338 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:15:39.897597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:15:39.897794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:15:39.898769 systemd[1]: kubelet.service: Consumed 190ms CPU time, 111.1M memory peak. Jul 15 05:15:40.000009 sshd[2340]: Accepted publickey for core from 139.178.89.65 port 41188 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:40.001254 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:40.007624 systemd-logind[1980]: New session 8 of user core. Jul 15 05:15:40.017148 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 05:15:40.113098 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 05:15:40.113361 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:40.119150 sudo[2352]: pam_unix(sudo:session): session closed for user root Jul 15 05:15:40.124812 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 05:15:40.125104 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:40.135396 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:15:40.174267 augenrules[2374]: No rules Jul 15 05:15:40.175755 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:15:40.176076 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 15 05:15:40.177493 sudo[2351]: pam_unix(sudo:session): session closed for user root Jul 15 05:15:40.201190 sshd[2350]: Connection closed by 139.178.89.65 port 41188 Jul 15 05:15:40.201709 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:40.206208 systemd[1]: sshd@7-172.31.18.224:22-139.178.89.65:41188.service: Deactivated successfully. Jul 15 05:15:40.208097 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 05:15:40.209077 systemd-logind[1980]: Session 8 logged out. Waiting for processes to exit. Jul 15 05:15:40.211375 systemd-logind[1980]: Removed session 8. Jul 15 05:15:40.237672 systemd[1]: Started sshd@8-172.31.18.224:22-139.178.89.65:41196.service - OpenSSH per-connection server daemon (139.178.89.65:41196). Jul 15 05:15:40.412131 sshd[2383]: Accepted publickey for core from 139.178.89.65 port 41196 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:40.413421 sshd-session[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:40.418780 systemd-logind[1980]: New session 9 of user core. Jul 15 05:15:40.425111 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 05:15:40.524928 sudo[2387]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 05:15:40.525305 sudo[2387]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:41.142289 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 05:15:41.164400 (dockerd)[2406]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 05:15:41.606643 dockerd[2406]: time="2025-07-15T05:15:41.606510867Z" level=info msg="Starting up" Jul 15 05:15:41.608060 dockerd[2406]: time="2025-07-15T05:15:41.608006627Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 05:15:41.620697 dockerd[2406]: time="2025-07-15T05:15:41.620648138Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 05:15:41.638849 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1916197396-merged.mount: Deactivated successfully. Jul 15 05:15:41.677760 dockerd[2406]: time="2025-07-15T05:15:41.677553768Z" level=info msg="Loading containers: start." Jul 15 05:15:41.687958 kernel: Initializing XFRM netlink socket Jul 15 05:15:41.997596 (udev-worker)[2426]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:15:42.065477 systemd-networkd[1854]: docker0: Link UP Jul 15 05:15:42.071057 dockerd[2406]: time="2025-07-15T05:15:42.070800295Z" level=info msg="Loading containers: done." Jul 15 05:15:42.089730 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2297576016-merged.mount: Deactivated successfully. 
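dockerd finishes initialization and serves its API on /run/docker.sock. A small illustrative smoke test against that socket:

    # illustrative: confirm the engine answers on its UNIX socket
    docker version
    curl -s --unix-socket /run/docker.sock http://localhost/_ping; echo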
Jul 15 05:15:42.092549 dockerd[2406]: time="2025-07-15T05:15:42.092501141Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 05:15:42.092704 dockerd[2406]: time="2025-07-15T05:15:42.092606897Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 05:15:42.092752 dockerd[2406]: time="2025-07-15T05:15:42.092711739Z" level=info msg="Initializing buildkit" Jul 15 05:15:42.123655 dockerd[2406]: time="2025-07-15T05:15:42.123615323Z" level=info msg="Completed buildkit initialization" Jul 15 05:15:42.128677 dockerd[2406]: time="2025-07-15T05:15:42.128154002Z" level=info msg="Daemon has completed initialization" Jul 15 05:15:42.128677 dockerd[2406]: time="2025-07-15T05:15:42.128210446Z" level=info msg="API listen on /run/docker.sock" Jul 15 05:15:42.128413 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 05:15:43.152274 containerd[1996]: time="2025-07-15T05:15:43.152218764Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 15 05:15:43.713961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553569643.mount: Deactivated successfully. Jul 15 05:15:45.312757 containerd[1996]: time="2025-07-15T05:15:45.312690985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:45.313695 containerd[1996]: time="2025-07-15T05:15:45.313653193Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 15 05:15:45.315363 containerd[1996]: time="2025-07-15T05:15:45.315307077Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:45.317898 containerd[1996]: time="2025-07-15T05:15:45.317839440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:45.318955 containerd[1996]: time="2025-07-15T05:15:45.318659646Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.166401311s" Jul 15 05:15:45.318955 containerd[1996]: time="2025-07-15T05:15:45.318694073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 15 05:15:45.319475 containerd[1996]: time="2025-07-15T05:15:45.319455796Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 15 05:15:47.055257 containerd[1996]: time="2025-07-15T05:15:47.055203176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:47.057541 containerd[1996]: time="2025-07-15T05:15:47.057330127Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 15 05:15:47.059576 containerd[1996]: time="2025-07-15T05:15:47.059537930Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:47.063354 containerd[1996]: time="2025-07-15T05:15:47.063301934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:47.064497 containerd[1996]: time="2025-07-15T05:15:47.064190169Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.744707721s" Jul 15 05:15:47.064497 containerd[1996]: time="2025-07-15T05:15:47.064221994Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 15 05:15:47.064720 containerd[1996]: time="2025-07-15T05:15:47.064701794Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 15 05:15:48.758615 containerd[1996]: time="2025-07-15T05:15:48.758560053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:48.759664 containerd[1996]: time="2025-07-15T05:15:48.759624222Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 15 05:15:48.760945 containerd[1996]: time="2025-07-15T05:15:48.760865201Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:48.763262 containerd[1996]: time="2025-07-15T05:15:48.763219492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:48.764101 containerd[1996]: time="2025-07-15T05:15:48.764077074Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.699303171s" Jul 15 05:15:48.764261 containerd[1996]: time="2025-07-15T05:15:48.764180251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 15 05:15:48.764895 containerd[1996]: time="2025-07-15T05:15:48.764817436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 15 05:15:49.839859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906607226.mount: Deactivated successfully. Jul 15 05:15:50.123013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
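Each "Pulled image ... size ... in ..." record above pairs a byte count with the elapsed pull time, so an effective throughput can be read straight off the log. A small sketch of that arithmetic; the regular expression and the abbreviated sample line are assumptions about the message format, not text copied from the journal.

```python
# Estimate pull throughput from a containerd "Pulled image" completion message.
import re

SAMPLE = ('Pulled image "registry.k8s.io/kube-controller-manager:v1.32.6" '
          'size "26385746" in 1.744707721s')

def pull_rate_mib_per_s(line: str) -> float:
    """Parse the reported size (bytes) and duration (seconds) and return MiB/s."""
    m = re.search(r'size "(\d+)" in ([\d.]+)s', line)
    if not m:
        raise ValueError("not a pull-completion line")
    size_bytes, seconds = int(m.group(1)), float(m.group(2))
    return size_bytes / seconds / (1024 * 1024)

print(f"{pull_rate_mib_per_s(SAMPLE):.1f} MiB/s")  # ~14.4 MiB/s for the sample above
```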
Jul 15 05:15:50.127039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:50.401277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:50.414760 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:15:50.493082 kubelet[2692]: E0715 05:15:50.493033 2692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:15:50.495985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:15:50.496391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:15:50.497011 systemd[1]: kubelet.service: Consumed 217ms CPU time, 110.5M memory peak. Jul 15 05:15:50.550519 containerd[1996]: time="2025-07-15T05:15:50.550468402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:50.552640 containerd[1996]: time="2025-07-15T05:15:50.552593982Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 15 05:15:50.555209 containerd[1996]: time="2025-07-15T05:15:50.555157972Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:50.558037 containerd[1996]: time="2025-07-15T05:15:50.557986769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:50.558749 containerd[1996]: time="2025-07-15T05:15:50.558447520Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.793523776s" Jul 15 05:15:50.558749 containerd[1996]: time="2025-07-15T05:15:50.558479197Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 15 05:15:50.559188 containerd[1996]: time="2025-07-15T05:15:50.559169283Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 05:15:51.033185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215991689.mount: Deactivated successfully. 
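The kubelet exit a few records above comes down to the absent /var/lib/kubelet/config.yaml; on a kubeadm-provisioned node that file is normally written during init/join, which is why the unit keeps restarting until it appears. Purely to illustrate what the kubelet is looking for, here is a sketch that drops a minimal placeholder KubeletConfiguration when the file is missing; the chosen fields are assumptions of this example, not the configuration this node eventually received.

```python
# Write a placeholder KubeletConfiguration only if none exists yet.
# The field values below are illustrative assumptions.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd               # matches the CRI-reported driver later in the log
staticPodPath: /etc/kubernetes/manifests
"""

def ensure_config(path: Path = KUBELET_CONFIG) -> None:
    """Create the config file (and parent directory) if it is genuinely absent."""
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(MINIMAL_CONFIG)

if __name__ == "__main__":
    ensure_config()
```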
Jul 15 05:15:51.939583 containerd[1996]: time="2025-07-15T05:15:51.939521231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:51.941758 containerd[1996]: time="2025-07-15T05:15:51.941709391Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 15 05:15:51.943579 containerd[1996]: time="2025-07-15T05:15:51.943142021Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:51.948099 containerd[1996]: time="2025-07-15T05:15:51.947830798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:51.949683 containerd[1996]: time="2025-07-15T05:15:51.948941474Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.389708377s" Jul 15 05:15:51.949683 containerd[1996]: time="2025-07-15T05:15:51.948982599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 05:15:51.950237 containerd[1996]: time="2025-07-15T05:15:51.950186929Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:15:52.399745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628917179.mount: Deactivated successfully. 
Jul 15 05:15:52.406397 containerd[1996]: time="2025-07-15T05:15:52.406342956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:15:52.407274 containerd[1996]: time="2025-07-15T05:15:52.407221851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 05:15:52.408371 containerd[1996]: time="2025-07-15T05:15:52.408316811Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:15:52.411047 containerd[1996]: time="2025-07-15T05:15:52.410984950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:15:52.411937 containerd[1996]: time="2025-07-15T05:15:52.411563364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 461.340457ms" Jul 15 05:15:52.411937 containerd[1996]: time="2025-07-15T05:15:52.411601794Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:15:52.412112 containerd[1996]: time="2025-07-15T05:15:52.412092060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 15 05:15:52.978508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66502424.mount: Deactivated successfully. 
Jul 15 05:15:55.369711 containerd[1996]: time="2025-07-15T05:15:55.369653378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:55.370993 containerd[1996]: time="2025-07-15T05:15:55.370944437Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 15 05:15:55.372181 containerd[1996]: time="2025-07-15T05:15:55.372129866Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:55.375358 containerd[1996]: time="2025-07-15T05:15:55.375283748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:55.376568 containerd[1996]: time="2025-07-15T05:15:55.376399856Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.964276447s" Jul 15 05:15:55.376568 containerd[1996]: time="2025-07-15T05:15:55.376440272Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 15 05:15:56.675677 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 05:15:58.429103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:58.429800 systemd[1]: kubelet.service: Consumed 217ms CPU time, 110.5M memory peak. Jul 15 05:15:58.432329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:58.468659 systemd[1]: Reload requested from client PID 2841 ('systemctl') (unit session-9.scope)... Jul 15 05:15:58.468678 systemd[1]: Reloading... Jul 15 05:15:58.614977 zram_generator::config[2888]: No configuration found. Jul 15 05:15:58.750834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:15:58.884119 systemd[1]: Reloading finished in 414 ms. Jul 15 05:15:58.951474 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:15:58.951561 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:15:58.952144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:58.952241 systemd[1]: kubelet.service: Consumed 140ms CPU time, 98.2M memory peak. Jul 15 05:15:58.954006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:59.227427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:59.238285 (kubelet)[2948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:15:59.294684 kubelet[2948]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
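The reload above warns that docker.socket still references the legacy /var/run/docker.sock path and that systemd rewrites it to /run/docker.sock on the fly. One way to silence the warning without touching the vendor unit is a drop-in that clears and re-sets ListenStream=; the drop-in path and file name below are choices of this sketch, not taken from the log.

```python
# Create a systemd drop-in that repoints docker.socket at /run/docker.sock.
# An empty ListenStream= clears the inherited list before the new value is set.
from pathlib import Path

DROPIN = Path("/etc/systemd/system/docker.socket.d/10-listen-run.conf")

DROPIN.parent.mkdir(parents=True, exist_ok=True)
DROPIN.write_text(
    "[Socket]\n"
    "ListenStream=\n"
    "ListenStream=/run/docker.sock\n"
)
# Followed by: systemctl daemon-reload && systemctl restart docker.socket
```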
Jul 15 05:15:59.295147 kubelet[2948]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:15:59.295147 kubelet[2948]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:15:59.295147 kubelet[2948]: I0715 05:15:59.294888 2948 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:15:59.581815 kubelet[2948]: I0715 05:15:59.581701 2948 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 05:15:59.581815 kubelet[2948]: I0715 05:15:59.581740 2948 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:15:59.582301 kubelet[2948]: I0715 05:15:59.582194 2948 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 05:15:59.643235 kubelet[2948]: E0715 05:15:59.642706 2948 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.224:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.224:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:59.643235 kubelet[2948]: I0715 05:15:59.643085 2948 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:15:59.661060 kubelet[2948]: I0715 05:15:59.661022 2948 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:15:59.667424 kubelet[2948]: I0715 05:15:59.667397 2948 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:15:59.667825 kubelet[2948]: I0715 05:15:59.667764 2948 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:15:59.668001 kubelet[2948]: I0715 05:15:59.667801 2948 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-224","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:15:59.670126 kubelet[2948]: I0715 05:15:59.670083 2948 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:15:59.670126 kubelet[2948]: I0715 05:15:59.670118 2948 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 05:15:59.670250 kubelet[2948]: I0715 05:15:59.670238 2948 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:15:59.676443 kubelet[2948]: I0715 05:15:59.676410 2948 kubelet.go:446] "Attempting to sync node with API server" Jul 15 05:15:59.676443 kubelet[2948]: I0715 05:15:59.676444 2948 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:15:59.678554 kubelet[2948]: I0715 05:15:59.678300 2948 kubelet.go:352] "Adding apiserver pod source" Jul 15 05:15:59.678554 kubelet[2948]: I0715 05:15:59.678329 2948 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:15:59.679418 kubelet[2948]: W0715 05:15:59.679222 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-224&limit=500&resourceVersion=0": dial tcp 172.31.18.224:6443: connect: connection refused Jul 15 05:15:59.679418 kubelet[2948]: E0715 05:15:59.679294 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-224&limit=500&resourceVersion=0\": dial tcp 172.31.18.224:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:59.681320 kubelet[2948]: W0715 
05:15:59.681286 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.224:6443: connect: connection refused Jul 15 05:15:59.681655 kubelet[2948]: E0715 05:15:59.681634 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.224:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:59.683397 kubelet[2948]: I0715 05:15:59.683363 2948 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:15:59.687777 kubelet[2948]: I0715 05:15:59.687281 2948 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:15:59.687777 kubelet[2948]: W0715 05:15:59.687343 2948 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 05:15:59.689896 kubelet[2948]: I0715 05:15:59.689876 2948 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:15:59.690025 kubelet[2948]: I0715 05:15:59.690017 2948 server.go:1287] "Started kubelet" Jul 15 05:15:59.690744 kubelet[2948]: I0715 05:15:59.690428 2948 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:15:59.713443 kubelet[2948]: I0715 05:15:59.713010 2948 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:15:59.713443 kubelet[2948]: I0715 05:15:59.713416 2948 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:15:59.714182 kubelet[2948]: I0715 05:15:59.714132 2948 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:15:59.718004 kubelet[2948]: I0715 05:15:59.717983 2948 server.go:479] "Adding debug handlers to kubelet server" Jul 15 05:15:59.722269 kubelet[2948]: I0715 05:15:59.721841 2948 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:15:59.726021 kubelet[2948]: I0715 05:15:59.725984 2948 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:15:59.727021 kubelet[2948]: E0715 05:15:59.726511 2948 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-224\" not found" Jul 15 05:15:59.730706 kubelet[2948]: E0715 05:15:59.725116 2948 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.224:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.224:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-224.185254e7d51a4e83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-224,UID:ip-172-31-18-224,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-224,},FirstTimestamp:2025-07-15 05:15:59.689993859 +0000 UTC m=+0.447647896,LastTimestamp:2025-07-15 05:15:59.689993859 +0000 UTC m=+0.447647896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-224,}" Jul 15 05:15:59.730706 kubelet[2948]: E0715 05:15:59.730630 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-224?timeout=10s\": dial tcp 172.31.18.224:6443: connect: connection refused" interval="200ms" Jul 15 05:15:59.731114 kubelet[2948]: I0715 05:15:59.731099 2948 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:15:59.732667 kubelet[2948]: W0715 05:15:59.732619 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.224:6443: connect: connection refused Jul 15 05:15:59.732989 kubelet[2948]: E0715 05:15:59.732894 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.224:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:59.735495 kubelet[2948]: I0715 05:15:59.735125 2948 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:15:59.736945 kubelet[2948]: I0715 05:15:59.736879 2948 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:15:59.737136 kubelet[2948]: I0715 05:15:59.737116 2948 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:15:59.740867 kubelet[2948]: I0715 05:15:59.740844 2948 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:15:59.756676 kubelet[2948]: I0715 05:15:59.754980 2948 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:15:59.761689 kubelet[2948]: I0715 05:15:59.761662 2948 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 05:15:59.764960 kubelet[2948]: I0715 05:15:59.764595 2948 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 05:15:59.764960 kubelet[2948]: I0715 05:15:59.764627 2948 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
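The container-manager configuration logged above carries the default hard-eviction thresholds: an absolute memory.available floor of 100Mi and percentage floors for nodefs/imagefs space and inodes (10%, 5%, 15%, 5%). Percentage thresholds are evaluated against filesystem capacity, so their byte value depends on the disk; the worked example below uses a made-up 20 GiB filesystem, not this node's actual size.

```python
# Translate the percentage-based eviction thresholds into bytes for an assumed disk.
GiB = 1024 ** 3
MiB = 1024 ** 2

root_fs_capacity = 20 * GiB                  # hypothetical node filesystem size

nodefs_floor  = 0.10 * root_fs_capacity      # nodefs.available  < 10% of capacity
imagefs_floor = 0.15 * root_fs_capacity      # imagefs.available < 15% of capacity
memory_floor  = 100 * MiB                    # memory.available  < 100Mi (absolute)

print(f"nodefs.available floor : {nodefs_floor / GiB:.1f} GiB")
print(f"imagefs.available floor: {imagefs_floor / GiB:.1f} GiB")
print(f"memory.available floor : {memory_floor / MiB:.0f} MiB")
```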
Jul 15 05:15:59.764960 kubelet[2948]: I0715 05:15:59.764646 2948 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 05:15:59.764960 kubelet[2948]: E0715 05:15:59.764695 2948 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:15:59.769174 kubelet[2948]: W0715 05:15:59.769012 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.224:6443: connect: connection refused Jul 15 05:15:59.769174 kubelet[2948]: E0715 05:15:59.769063 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.224:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:59.773178 kubelet[2948]: I0715 05:15:59.773158 2948 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:15:59.773556 kubelet[2948]: I0715 05:15:59.773541 2948 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:15:59.773677 kubelet[2948]: I0715 05:15:59.773667 2948 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:15:59.785518 kubelet[2948]: I0715 05:15:59.785486 2948 policy_none.go:49] "None policy: Start" Jul 15 05:15:59.785657 kubelet[2948]: I0715 05:15:59.785554 2948 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:15:59.785657 kubelet[2948]: I0715 05:15:59.785582 2948 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:15:59.796713 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 05:15:59.805597 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:15:59.810441 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 05:15:59.824094 kubelet[2948]: I0715 05:15:59.823080 2948 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:15:59.824094 kubelet[2948]: I0715 05:15:59.823267 2948 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:15:59.824094 kubelet[2948]: I0715 05:15:59.823276 2948 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:15:59.827092 kubelet[2948]: I0715 05:15:59.826755 2948 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:15:59.827092 kubelet[2948]: E0715 05:15:59.826982 2948 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 05:15:59.827092 kubelet[2948]: E0715 05:15:59.827013 2948 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-224\" not found" Jul 15 05:15:59.877375 systemd[1]: Created slice kubepods-burstable-podac07b934a252126c7f33ed0ac65947b0.slice - libcontainer container kubepods-burstable-podac07b934a252126c7f33ed0ac65947b0.slice. 
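The kubepods-burstable-pod&lt;hash&gt;.slice units created here follow the kubelet's systemd cgroup naming: the QoS-class slice name plus "pod" plus the pod UID, with dashes in the UID turned into underscores (the static-pod UIDs above are dash-free config hashes, so they pass through unchanged). A small sketch of that mapping, offered as an illustration rather than as kubelet code:

```python
# Build the systemd slice name the kubelet uses for a pod's cgroup.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    escaped_uid = pod_uid.replace("-", "_")   # dashes delimit the slice hierarchy, so UID dashes are escaped
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

print(pod_slice_name("burstable", "ac07b934a252126c7f33ed0ac65947b0"))
# -> kubepods-burstable-podac07b934a252126c7f33ed0ac65947b0.slice
```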
Jul 15 05:15:59.892624 kubelet[2948]: E0715 05:15:59.892573 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:15:59.898015 systemd[1]: Created slice kubepods-burstable-pod0140bb43c12902110e94dadc71de815a.slice - libcontainer container kubepods-burstable-pod0140bb43c12902110e94dadc71de815a.slice. Jul 15 05:15:59.904842 kubelet[2948]: E0715 05:15:59.904794 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:15:59.908173 systemd[1]: Created slice kubepods-burstable-poda1e75edb38df30805e5d14090c1b1383.slice - libcontainer container kubepods-burstable-poda1e75edb38df30805e5d14090c1b1383.slice. Jul 15 05:15:59.910630 kubelet[2948]: E0715 05:15:59.910605 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:15:59.925809 kubelet[2948]: I0715 05:15:59.925756 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-224" Jul 15 05:15:59.926195 kubelet[2948]: E0715 05:15:59.926164 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.224:6443/api/v1/nodes\": dial tcp 172.31.18.224:6443: connect: connection refused" node="ip-172-31-18-224" Jul 15 05:15:59.931780 kubelet[2948]: E0715 05:15:59.931741 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-224?timeout=10s\": dial tcp 172.31.18.224:6443: connect: connection refused" interval="400ms" Jul 15 05:15:59.935990 kubelet[2948]: I0715 05:15:59.935887 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac07b934a252126c7f33ed0ac65947b0-ca-certs\") pod \"kube-apiserver-ip-172-31-18-224\" (UID: \"ac07b934a252126c7f33ed0ac65947b0\") " pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:15:59.935990 kubelet[2948]: I0715 05:15:59.935941 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:15:59.935990 kubelet[2948]: I0715 05:15:59.935962 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:15:59.935990 kubelet[2948]: I0715 05:15:59.935979 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:15:59.935990 kubelet[2948]: I0715 05:15:59.935994 2948 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac07b934a252126c7f33ed0ac65947b0-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-224\" (UID: \"ac07b934a252126c7f33ed0ac65947b0\") " pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:15:59.936324 kubelet[2948]: I0715 05:15:59.936013 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac07b934a252126c7f33ed0ac65947b0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-224\" (UID: \"ac07b934a252126c7f33ed0ac65947b0\") " pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:15:59.936324 kubelet[2948]: I0715 05:15:59.936034 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:15:59.936324 kubelet[2948]: I0715 05:15:59.936051 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:15:59.936324 kubelet[2948]: I0715 05:15:59.936069 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1e75edb38df30805e5d14090c1b1383-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-224\" (UID: \"a1e75edb38df30805e5d14090c1b1383\") " pod="kube-system/kube-scheduler-ip-172-31-18-224" Jul 15 05:16:00.130081 kubelet[2948]: I0715 05:16:00.129694 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-224" Jul 15 05:16:00.130476 kubelet[2948]: E0715 05:16:00.130428 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.224:6443/api/v1/nodes\": dial tcp 172.31.18.224:6443: connect: connection refused" node="ip-172-31-18-224" Jul 15 05:16:00.197307 containerd[1996]: time="2025-07-15T05:16:00.197252210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-224,Uid:ac07b934a252126c7f33ed0ac65947b0,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:00.208008 containerd[1996]: time="2025-07-15T05:16:00.207884647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-224,Uid:0140bb43c12902110e94dadc71de815a,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:00.213627 containerd[1996]: time="2025-07-15T05:16:00.213585974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-224,Uid:a1e75edb38df30805e5d14090c1b1383,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:00.332285 kubelet[2948]: E0715 05:16:00.332243 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-224?timeout=10s\": dial tcp 172.31.18.224:6443: connect: connection refused" interval="800ms" Jul 15 05:16:00.414260 containerd[1996]: 
time="2025-07-15T05:16:00.414072433Z" level=info msg="connecting to shim abcafa4b34f4a203a16f7b3a7932f02dde7454b2d38685c3bb66326481ac0f6d" address="unix:///run/containerd/s/057e3a05e9a371df8c43de391bb64c615b6da0903753cfa72f19d3f22e54839f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:00.415171 containerd[1996]: time="2025-07-15T05:16:00.415131007Z" level=info msg="connecting to shim e5206b197b31e57ac3322c24acb02ef7aad1f3bfaff0bad7b74cbb9d43f7c35b" address="unix:///run/containerd/s/8f09c88bab7706f3372e6902668b5bf9cd7c984a2496d22f43715c4ce94f195f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:00.422711 containerd[1996]: time="2025-07-15T05:16:00.422624285Z" level=info msg="connecting to shim c200d94a4f246219f00390b4a65fb795c796f21c55b7405d26f78329aa954cfc" address="unix:///run/containerd/s/38b117e53867d61e9b720be1663edb63bee63eae85513989752969eaee7a581b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:00.502035 kubelet[2948]: W0715 05:16:00.501937 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.224:6443: connect: connection refused Jul 15 05:16:00.502035 kubelet[2948]: E0715 05:16:00.502000 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.224:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:16:00.533355 kubelet[2948]: I0715 05:16:00.532900 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-224" Jul 15 05:16:00.533355 kubelet[2948]: E0715 05:16:00.533304 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.224:6443/api/v1/nodes\": dial tcp 172.31.18.224:6443: connect: connection refused" node="ip-172-31-18-224" Jul 15 05:16:00.550129 systemd[1]: Started cri-containerd-abcafa4b34f4a203a16f7b3a7932f02dde7454b2d38685c3bb66326481ac0f6d.scope - libcontainer container abcafa4b34f4a203a16f7b3a7932f02dde7454b2d38685c3bb66326481ac0f6d. Jul 15 05:16:00.551987 systemd[1]: Started cri-containerd-c200d94a4f246219f00390b4a65fb795c796f21c55b7405d26f78329aa954cfc.scope - libcontainer container c200d94a4f246219f00390b4a65fb795c796f21c55b7405d26f78329aa954cfc. Jul 15 05:16:00.553534 systemd[1]: Started cri-containerd-e5206b197b31e57ac3322c24acb02ef7aad1f3bfaff0bad7b74cbb9d43f7c35b.scope - libcontainer container e5206b197b31e57ac3322c24acb02ef7aad1f3bfaff0bad7b74cbb9d43f7c35b. 
Jul 15 05:16:00.667623 containerd[1996]: time="2025-07-15T05:16:00.667020936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-224,Uid:ac07b934a252126c7f33ed0ac65947b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"abcafa4b34f4a203a16f7b3a7932f02dde7454b2d38685c3bb66326481ac0f6d\"" Jul 15 05:16:00.672518 containerd[1996]: time="2025-07-15T05:16:00.672474788Z" level=info msg="CreateContainer within sandbox \"abcafa4b34f4a203a16f7b3a7932f02dde7454b2d38685c3bb66326481ac0f6d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:16:00.677347 containerd[1996]: time="2025-07-15T05:16:00.677289638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-224,Uid:a1e75edb38df30805e5d14090c1b1383,Namespace:kube-system,Attempt:0,} returns sandbox id \"c200d94a4f246219f00390b4a65fb795c796f21c55b7405d26f78329aa954cfc\"" Jul 15 05:16:00.680657 containerd[1996]: time="2025-07-15T05:16:00.680539902Z" level=info msg="CreateContainer within sandbox \"c200d94a4f246219f00390b4a65fb795c796f21c55b7405d26f78329aa954cfc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:16:00.693231 containerd[1996]: time="2025-07-15T05:16:00.693191672Z" level=info msg="Container 323c83cc22d17276e5b5efb9f1ee9ec004a02d0a60e7f55ce3bb76f6fbb15857: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:00.698284 containerd[1996]: time="2025-07-15T05:16:00.698233492Z" level=info msg="Container b0d661325424f7bcabe5ddbcc60105b657ad4fcf800928c3f9d8a224c7e37b48: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:00.716894 containerd[1996]: time="2025-07-15T05:16:00.716851252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-224,Uid:0140bb43c12902110e94dadc71de815a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5206b197b31e57ac3322c24acb02ef7aad1f3bfaff0bad7b74cbb9d43f7c35b\"" Jul 15 05:16:00.721692 containerd[1996]: time="2025-07-15T05:16:00.721652120Z" level=info msg="CreateContainer within sandbox \"abcafa4b34f4a203a16f7b3a7932f02dde7454b2d38685c3bb66326481ac0f6d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"323c83cc22d17276e5b5efb9f1ee9ec004a02d0a60e7f55ce3bb76f6fbb15857\"" Jul 15 05:16:00.722357 containerd[1996]: time="2025-07-15T05:16:00.722333490Z" level=info msg="StartContainer for \"323c83cc22d17276e5b5efb9f1ee9ec004a02d0a60e7f55ce3bb76f6fbb15857\"" Jul 15 05:16:00.723396 containerd[1996]: time="2025-07-15T05:16:00.722663376Z" level=info msg="CreateContainer within sandbox \"c200d94a4f246219f00390b4a65fb795c796f21c55b7405d26f78329aa954cfc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b0d661325424f7bcabe5ddbcc60105b657ad4fcf800928c3f9d8a224c7e37b48\"" Jul 15 05:16:00.723396 containerd[1996]: time="2025-07-15T05:16:00.722999765Z" level=info msg="CreateContainer within sandbox \"e5206b197b31e57ac3322c24acb02ef7aad1f3bfaff0bad7b74cbb9d43f7c35b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:16:00.724350 containerd[1996]: time="2025-07-15T05:16:00.724326224Z" level=info msg="connecting to shim 323c83cc22d17276e5b5efb9f1ee9ec004a02d0a60e7f55ce3bb76f6fbb15857" address="unix:///run/containerd/s/057e3a05e9a371df8c43de391bb64c615b6da0903753cfa72f19d3f22e54839f" protocol=ttrpc version=3 Jul 15 05:16:00.725339 containerd[1996]: time="2025-07-15T05:16:00.725283791Z" level=info msg="StartContainer for 
\"b0d661325424f7bcabe5ddbcc60105b657ad4fcf800928c3f9d8a224c7e37b48\"" Jul 15 05:16:00.726284 containerd[1996]: time="2025-07-15T05:16:00.726254150Z" level=info msg="connecting to shim b0d661325424f7bcabe5ddbcc60105b657ad4fcf800928c3f9d8a224c7e37b48" address="unix:///run/containerd/s/38b117e53867d61e9b720be1663edb63bee63eae85513989752969eaee7a581b" protocol=ttrpc version=3 Jul 15 05:16:00.729631 containerd[1996]: time="2025-07-15T05:16:00.729601553Z" level=info msg="Container b83c137954f45710c06b480e5b4354891198fc886b5c7f0e48001cfe000eab40: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:00.746207 systemd[1]: Started cri-containerd-323c83cc22d17276e5b5efb9f1ee9ec004a02d0a60e7f55ce3bb76f6fbb15857.scope - libcontainer container 323c83cc22d17276e5b5efb9f1ee9ec004a02d0a60e7f55ce3bb76f6fbb15857. Jul 15 05:16:00.756345 containerd[1996]: time="2025-07-15T05:16:00.756297192Z" level=info msg="CreateContainer within sandbox \"e5206b197b31e57ac3322c24acb02ef7aad1f3bfaff0bad7b74cbb9d43f7c35b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b83c137954f45710c06b480e5b4354891198fc886b5c7f0e48001cfe000eab40\"" Jul 15 05:16:00.757380 systemd[1]: Started cri-containerd-b0d661325424f7bcabe5ddbcc60105b657ad4fcf800928c3f9d8a224c7e37b48.scope - libcontainer container b0d661325424f7bcabe5ddbcc60105b657ad4fcf800928c3f9d8a224c7e37b48. Jul 15 05:16:00.758180 containerd[1996]: time="2025-07-15T05:16:00.758111366Z" level=info msg="StartContainer for \"b83c137954f45710c06b480e5b4354891198fc886b5c7f0e48001cfe000eab40\"" Jul 15 05:16:00.761682 containerd[1996]: time="2025-07-15T05:16:00.761601045Z" level=info msg="connecting to shim b83c137954f45710c06b480e5b4354891198fc886b5c7f0e48001cfe000eab40" address="unix:///run/containerd/s/8f09c88bab7706f3372e6902668b5bf9cd7c984a2496d22f43715c4ce94f195f" protocol=ttrpc version=3 Jul 15 05:16:00.814139 systemd[1]: Started cri-containerd-b83c137954f45710c06b480e5b4354891198fc886b5c7f0e48001cfe000eab40.scope - libcontainer container b83c137954f45710c06b480e5b4354891198fc886b5c7f0e48001cfe000eab40. 
Jul 15 05:16:00.862688 containerd[1996]: time="2025-07-15T05:16:00.862633864Z" level=info msg="StartContainer for \"323c83cc22d17276e5b5efb9f1ee9ec004a02d0a60e7f55ce3bb76f6fbb15857\" returns successfully" Jul 15 05:16:00.887229 kubelet[2948]: W0715 05:16:00.887066 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-224&limit=500&resourceVersion=0": dial tcp 172.31.18.224:6443: connect: connection refused Jul 15 05:16:00.887470 kubelet[2948]: E0715 05:16:00.887407 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-224&limit=500&resourceVersion=0\": dial tcp 172.31.18.224:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:16:00.889981 containerd[1996]: time="2025-07-15T05:16:00.889939510Z" level=info msg="StartContainer for \"b0d661325424f7bcabe5ddbcc60105b657ad4fcf800928c3f9d8a224c7e37b48\" returns successfully" Jul 15 05:16:00.937517 containerd[1996]: time="2025-07-15T05:16:00.937399469Z" level=info msg="StartContainer for \"b83c137954f45710c06b480e5b4354891198fc886b5c7f0e48001cfe000eab40\" returns successfully" Jul 15 05:16:00.986321 kubelet[2948]: W0715 05:16:00.986189 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.224:6443: connect: connection refused Jul 15 05:16:00.986321 kubelet[2948]: E0715 05:16:00.986276 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.224:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:16:01.133778 kubelet[2948]: E0715 05:16:01.133705 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-224?timeout=10s\": dial tcp 172.31.18.224:6443: connect: connection refused" interval="1.6s" Jul 15 05:16:01.336025 kubelet[2948]: I0715 05:16:01.335553 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-224" Jul 15 05:16:01.337603 kubelet[2948]: E0715 05:16:01.337566 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.224:6443/api/v1/nodes\": dial tcp 172.31.18.224:6443: connect: connection refused" node="ip-172-31-18-224" Jul 15 05:16:01.364736 kubelet[2948]: W0715 05:16:01.364598 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.224:6443: connect: connection refused Jul 15 05:16:01.364736 kubelet[2948]: E0715 05:16:01.364704 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.224:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:16:01.811265 kubelet[2948]: 
E0715 05:16:01.810541 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:16:01.817368 kubelet[2948]: E0715 05:16:01.817332 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:16:01.821292 kubelet[2948]: E0715 05:16:01.821094 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:16:02.823885 kubelet[2948]: E0715 05:16:02.823183 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:16:02.823885 kubelet[2948]: E0715 05:16:02.823518 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:16:02.823885 kubelet[2948]: E0715 05:16:02.823761 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:16:02.941669 kubelet[2948]: I0715 05:16:02.940843 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-224" Jul 15 05:16:03.292635 kubelet[2948]: E0715 05:16:03.292597 2948 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-224\" not found" node="ip-172-31-18-224" Jul 15 05:16:03.474694 kubelet[2948]: I0715 05:16:03.474028 2948 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-224" Jul 15 05:16:03.474694 kubelet[2948]: E0715 05:16:03.474072 2948 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-224\": node \"ip-172-31-18-224\" not found" Jul 15 05:16:03.527452 kubelet[2948]: I0715 05:16:03.527411 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:16:03.532476 kubelet[2948]: E0715 05:16:03.532442 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-224\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:16:03.532476 kubelet[2948]: I0715 05:16:03.532470 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:03.534103 kubelet[2948]: E0715 05:16:03.534059 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-224\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:03.534103 kubelet[2948]: I0715 05:16:03.534084 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-224" Jul 15 05:16:03.535684 kubelet[2948]: E0715 05:16:03.535658 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-224\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-224" Jul 15 05:16:03.680953 kubelet[2948]: I0715 05:16:03.680884 2948 apiserver.go:52] "Watching apiserver" 
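The lease controller's "Failed to ensure lease exists, will retry" entries back off by doubling — interval="200ms", then "400ms", "800ms" and "1.6s" — until the API server on 172.31.18.224:6443 starts answering. A generic sketch of that doubling-retry pattern; the cap, attempt limit and probe function are assumptions of this example, not kubelet internals.

```python
# Retry a probe with a doubling delay, the same shape as the retry intervals above.
import socket
import time

def retry_with_doubling(probe, initial=0.2, cap=7.0, attempts=10) -> bool:
    """Call probe() until it succeeds, doubling the sleep after each failure."""
    delay = initial
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
        delay = min(delay * 2, cap)
    return False

def apiserver_up(host: str = "172.31.18.224", port: int = 6443) -> bool:
    """True once a TCP connection to the API server endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

# Example: retry_with_doubling(apiserver_up)
```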
Jul 15 05:16:03.732119 kubelet[2948]: I0715 05:16:03.732076 2948 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:16:03.821715 kubelet[2948]: I0715 05:16:03.821527 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:16:03.821715 kubelet[2948]: I0715 05:16:03.821596 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-224" Jul 15 05:16:03.823611 kubelet[2948]: E0715 05:16:03.823445 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-224\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-224" Jul 15 05:16:03.823611 kubelet[2948]: E0715 05:16:03.823554 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-224\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:16:05.701292 systemd[1]: Reload requested from client PID 3218 ('systemctl') (unit session-9.scope)... Jul 15 05:16:05.701311 systemd[1]: Reloading... Jul 15 05:16:05.805936 zram_generator::config[3268]: No configuration found. Jul 15 05:16:05.915708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:16:06.070517 systemd[1]: Reloading finished in 368 ms. Jul 15 05:16:06.098727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:16:06.114266 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:16:06.114508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:16:06.114573 systemd[1]: kubelet.service: Consumed 882ms CPU time, 128.5M memory peak. Jul 15 05:16:06.116777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:16:06.386586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:16:06.399415 (kubelet)[3322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:16:06.475642 kubelet[3322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:16:06.475642 kubelet[3322]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:16:06.475642 kubelet[3322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 05:16:06.476138 kubelet[3322]: I0715 05:16:06.475753 3322 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:16:06.488716 kubelet[3322]: I0715 05:16:06.488196 3322 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 05:16:06.488716 kubelet[3322]: I0715 05:16:06.488235 3322 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:16:06.489114 kubelet[3322]: I0715 05:16:06.489084 3322 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 05:16:06.490612 kubelet[3322]: I0715 05:16:06.490578 3322 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 05:16:06.502003 kubelet[3322]: I0715 05:16:06.501951 3322 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:16:06.507244 kubelet[3322]: I0715 05:16:06.507155 3322 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:16:06.511506 kubelet[3322]: I0715 05:16:06.510030 3322 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 05:16:06.511506 kubelet[3322]: I0715 05:16:06.510311 3322 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:16:06.511506 kubelet[3322]: I0715 05:16:06.510341 3322 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-224","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:16:06.511506 kubelet[3322]: I0715 05:16:06.510536 3322 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:16:06.511844 kubelet[3322]: I0715 05:16:06.510547 3322 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 05:16:06.511844 kubelet[3322]: I0715 05:16:06.510593 3322 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:16:06.511844 kubelet[3322]: 
I0715 05:16:06.510943 3322 kubelet.go:446] "Attempting to sync node with API server" Jul 15 05:16:06.511844 kubelet[3322]: I0715 05:16:06.510969 3322 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:16:06.511844 kubelet[3322]: I0715 05:16:06.510992 3322 kubelet.go:352] "Adding apiserver pod source" Jul 15 05:16:06.511844 kubelet[3322]: I0715 05:16:06.511006 3322 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:16:06.514622 kubelet[3322]: I0715 05:16:06.514587 3322 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:16:06.516356 kubelet[3322]: I0715 05:16:06.516331 3322 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:16:06.518018 kubelet[3322]: I0715 05:16:06.517995 3322 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:16:06.518113 kubelet[3322]: I0715 05:16:06.518039 3322 server.go:1287] "Started kubelet" Jul 15 05:16:06.536657 kubelet[3322]: I0715 05:16:06.536612 3322 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:16:06.542234 kubelet[3322]: I0715 05:16:06.542167 3322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:16:06.543199 kubelet[3322]: I0715 05:16:06.543175 3322 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:16:06.544324 kubelet[3322]: I0715 05:16:06.544305 3322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:16:06.553163 kubelet[3322]: I0715 05:16:06.553132 3322 server.go:479] "Adding debug handlers to kubelet server" Jul 15 05:16:06.555629 kubelet[3322]: I0715 05:16:06.555595 3322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:16:06.556968 kubelet[3322]: I0715 05:16:06.556928 3322 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:16:06.557525 kubelet[3322]: E0715 05:16:06.557485 3322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-224\" not found" Jul 15 05:16:06.560432 kubelet[3322]: E0715 05:16:06.559969 3322 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:16:06.560432 kubelet[3322]: I0715 05:16:06.560368 3322 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:16:06.562059 kubelet[3322]: I0715 05:16:06.562033 3322 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:16:06.562568 kubelet[3322]: I0715 05:16:06.562500 3322 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:16:06.563235 kubelet[3322]: I0715 05:16:06.562722 3322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:16:06.565149 kubelet[3322]: I0715 05:16:06.565119 3322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:16:06.567367 kubelet[3322]: I0715 05:16:06.567342 3322 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 05:16:06.567519 kubelet[3322]: I0715 05:16:06.567508 3322 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 05:16:06.567614 kubelet[3322]: I0715 05:16:06.567604 3322 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 05:16:06.567677 kubelet[3322]: I0715 05:16:06.567669 3322 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 05:16:06.567788 kubelet[3322]: E0715 05:16:06.567768 3322 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:16:06.573118 kubelet[3322]: I0715 05:16:06.573087 3322 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:16:06.667070 kubelet[3322]: I0715 05:16:06.665468 3322 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:16:06.667070 kubelet[3322]: I0715 05:16:06.665490 3322 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:16:06.667070 kubelet[3322]: I0715 05:16:06.665512 3322 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:16:06.667070 kubelet[3322]: I0715 05:16:06.665718 3322 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 05:16:06.667070 kubelet[3322]: I0715 05:16:06.665732 3322 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 05:16:06.667070 kubelet[3322]: I0715 05:16:06.665757 3322 policy_none.go:49] "None policy: Start" Jul 15 05:16:06.667070 kubelet[3322]: I0715 05:16:06.665770 3322 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:16:06.667070 kubelet[3322]: I0715 05:16:06.665782 3322 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:16:06.667070 kubelet[3322]: I0715 05:16:06.665939 3322 state_mem.go:75] "Updated machine memory state" Jul 15 05:16:06.668982 kubelet[3322]: E0715 05:16:06.668951 3322 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:16:06.673403 kubelet[3322]: I0715 05:16:06.673364 3322 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:16:06.673623 kubelet[3322]: I0715 05:16:06.673565 3322 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:16:06.673623 kubelet[3322]: I0715 05:16:06.673584 3322 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:16:06.675689 kubelet[3322]: I0715 05:16:06.675290 3322 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:16:06.682987 kubelet[3322]: E0715 05:16:06.682953 3322 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 05:16:06.797930 kubelet[3322]: I0715 05:16:06.797816 3322 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-224" Jul 15 05:16:06.808890 kubelet[3322]: I0715 05:16:06.808468 3322 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-224" Jul 15 05:16:06.808890 kubelet[3322]: I0715 05:16:06.808555 3322 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-224" Jul 15 05:16:06.869693 kubelet[3322]: I0715 05:16:06.869653 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:06.873346 kubelet[3322]: I0715 05:16:06.873137 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:16:06.873346 kubelet[3322]: I0715 05:16:06.873271 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-224" Jul 15 05:16:06.964243 kubelet[3322]: I0715 05:16:06.963708 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:06.964243 kubelet[3322]: I0715 05:16:06.963751 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:06.964243 kubelet[3322]: I0715 05:16:06.963776 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac07b934a252126c7f33ed0ac65947b0-ca-certs\") pod \"kube-apiserver-ip-172-31-18-224\" (UID: \"ac07b934a252126c7f33ed0ac65947b0\") " pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:16:06.964243 kubelet[3322]: I0715 05:16:06.963793 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac07b934a252126c7f33ed0ac65947b0-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-224\" (UID: \"ac07b934a252126c7f33ed0ac65947b0\") " pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:16:06.964243 kubelet[3322]: I0715 05:16:06.963808 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:06.964464 kubelet[3322]: I0715 05:16:06.963825 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:06.964464 kubelet[3322]: I0715 05:16:06.963840 
3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1e75edb38df30805e5d14090c1b1383-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-224\" (UID: \"a1e75edb38df30805e5d14090c1b1383\") " pod="kube-system/kube-scheduler-ip-172-31-18-224" Jul 15 05:16:06.964464 kubelet[3322]: I0715 05:16:06.963855 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac07b934a252126c7f33ed0ac65947b0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-224\" (UID: \"ac07b934a252126c7f33ed0ac65947b0\") " pod="kube-system/kube-apiserver-ip-172-31-18-224" Jul 15 05:16:06.964464 kubelet[3322]: I0715 05:16:06.963870 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0140bb43c12902110e94dadc71de815a-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-224\" (UID: \"0140bb43c12902110e94dadc71de815a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:07.512443 kubelet[3322]: I0715 05:16:07.512387 3322 apiserver.go:52] "Watching apiserver" Jul 15 05:16:07.560525 kubelet[3322]: I0715 05:16:07.560485 3322 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:16:07.622327 kubelet[3322]: I0715 05:16:07.622225 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:07.636621 kubelet[3322]: E0715 05:16:07.636567 3322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-224\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-224" Jul 15 05:16:07.684120 kubelet[3322]: I0715 05:16:07.683896 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-224" podStartSLOduration=1.683874738 podStartE2EDuration="1.683874738s" podCreationTimestamp="2025-07-15 05:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:07.667344623 +0000 UTC m=+1.259679587" watchObservedRunningTime="2025-07-15 05:16:07.683874738 +0000 UTC m=+1.276209699" Jul 15 05:16:07.696987 kubelet[3322]: I0715 05:16:07.696812 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-224" podStartSLOduration=1.696787893 podStartE2EDuration="1.696787893s" podCreationTimestamp="2025-07-15 05:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:07.684554641 +0000 UTC m=+1.276889643" watchObservedRunningTime="2025-07-15 05:16:07.696787893 +0000 UTC m=+1.289122856" Jul 15 05:16:07.713730 kubelet[3322]: I0715 05:16:07.713665 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-224" podStartSLOduration=1.7136461509999998 podStartE2EDuration="1.713646151s" podCreationTimestamp="2025-07-15 05:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:07.698171974 +0000 UTC m=+1.290506947" watchObservedRunningTime="2025-07-15 05:16:07.713646151 +0000 
UTC m=+1.305981113" Jul 15 05:16:10.664073 update_engine[1985]: I20250715 05:16:10.663965 1985 update_attempter.cc:509] Updating boot flags... Jul 15 05:16:11.168421 kubelet[3322]: I0715 05:16:11.168372 3322 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 05:16:11.170014 containerd[1996]: time="2025-07-15T05:16:11.169234520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 05:16:11.170534 kubelet[3322]: I0715 05:16:11.169387 3322 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 05:16:11.940825 systemd[1]: Created slice kubepods-besteffort-podfc0d6f73_2baf_48ba_a7d5_12d2b884e667.slice - libcontainer container kubepods-besteffort-podfc0d6f73_2baf_48ba_a7d5_12d2b884e667.slice. Jul 15 05:16:12.002004 kubelet[3322]: I0715 05:16:12.001942 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc0d6f73-2baf-48ba-a7d5-12d2b884e667-lib-modules\") pod \"kube-proxy-k2lsh\" (UID: \"fc0d6f73-2baf-48ba-a7d5-12d2b884e667\") " pod="kube-system/kube-proxy-k2lsh" Jul 15 05:16:12.002004 kubelet[3322]: I0715 05:16:12.002003 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqbvk\" (UniqueName: \"kubernetes.io/projected/fc0d6f73-2baf-48ba-a7d5-12d2b884e667-kube-api-access-dqbvk\") pod \"kube-proxy-k2lsh\" (UID: \"fc0d6f73-2baf-48ba-a7d5-12d2b884e667\") " pod="kube-system/kube-proxy-k2lsh" Jul 15 05:16:12.002211 kubelet[3322]: I0715 05:16:12.002036 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc0d6f73-2baf-48ba-a7d5-12d2b884e667-kube-proxy\") pod \"kube-proxy-k2lsh\" (UID: \"fc0d6f73-2baf-48ba-a7d5-12d2b884e667\") " pod="kube-system/kube-proxy-k2lsh" Jul 15 05:16:12.002211 kubelet[3322]: I0715 05:16:12.002059 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc0d6f73-2baf-48ba-a7d5-12d2b884e667-xtables-lock\") pod \"kube-proxy-k2lsh\" (UID: \"fc0d6f73-2baf-48ba-a7d5-12d2b884e667\") " pod="kube-system/kube-proxy-k2lsh" Jul 15 05:16:12.251503 containerd[1996]: time="2025-07-15T05:16:12.251382359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k2lsh,Uid:fc0d6f73-2baf-48ba-a7d5-12d2b884e667,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:12.286374 containerd[1996]: time="2025-07-15T05:16:12.286331049Z" level=info msg="connecting to shim 9027823144a6db6bb99a2133c33c74f2e537701f4297ec9a4c4d59775f7e4548" address="unix:///run/containerd/s/a0f6d28b53360b4ff2225ea9aab1ada83023b4da189b2e349cab0c9a70217a1f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:12.330807 systemd[1]: Started cri-containerd-9027823144a6db6bb99a2133c33c74f2e537701f4297ec9a4c4d59775f7e4548.scope - libcontainer container 9027823144a6db6bb99a2133c33c74f2e537701f4297ec9a4c4d59775f7e4548. Jul 15 05:16:12.402792 systemd[1]: Created slice kubepods-besteffort-pod3fcd21d2_35c3_4b43_9b5f_586bcb678c9e.slice - libcontainer container kubepods-besteffort-pod3fcd21d2_35c3_4b43_9b5f_586bcb678c9e.slice. 
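At 05:16:11 the kubelet records the pod CIDR moving from empty to 192.168.0.0/24 and pushes the runtime config through CRI. A minimal sketch of the containment check that assignment implies, using only Go's standard net package; the candidate addresses are made up for illustration, only the CIDR comes from the log:

```go
// Minimal sketch: check whether candidate pod IPs fall inside the pod CIDR
// that the kubelet pushed to the runtime (192.168.0.0/24 in the log above).
package main

import (
	"fmt"
	"net"
)

func main() {
	_, podCIDR, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}

	// Example addresses are hypothetical.
	for _, ip := range []string{"192.168.0.17", "192.168.1.5", "172.31.18.224"} {
		fmt.Printf("%-15s in %s: %v\n", ip, podCIDR, podCIDR.Contains(net.ParseIP(ip)))
	}
}
```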
Jul 15 05:16:12.407484 containerd[1996]: time="2025-07-15T05:16:12.407097343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k2lsh,Uid:fc0d6f73-2baf-48ba-a7d5-12d2b884e667,Namespace:kube-system,Attempt:0,} returns sandbox id \"9027823144a6db6bb99a2133c33c74f2e537701f4297ec9a4c4d59775f7e4548\"" Jul 15 05:16:12.408944 kubelet[3322]: I0715 05:16:12.408855 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3fcd21d2-35c3-4b43-9b5f-586bcb678c9e-var-lib-calico\") pod \"tigera-operator-747864d56d-7j2b9\" (UID: \"3fcd21d2-35c3-4b43-9b5f-586bcb678c9e\") " pod="tigera-operator/tigera-operator-747864d56d-7j2b9" Jul 15 05:16:12.408944 kubelet[3322]: I0715 05:16:12.408918 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdx8r\" (UniqueName: \"kubernetes.io/projected/3fcd21d2-35c3-4b43-9b5f-586bcb678c9e-kube-api-access-zdx8r\") pod \"tigera-operator-747864d56d-7j2b9\" (UID: \"3fcd21d2-35c3-4b43-9b5f-586bcb678c9e\") " pod="tigera-operator/tigera-operator-747864d56d-7j2b9" Jul 15 05:16:12.413052 containerd[1996]: time="2025-07-15T05:16:12.412967602Z" level=info msg="CreateContainer within sandbox \"9027823144a6db6bb99a2133c33c74f2e537701f4297ec9a4c4d59775f7e4548\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 05:16:12.442411 containerd[1996]: time="2025-07-15T05:16:12.442358896Z" level=info msg="Container cd9a87cdfce5bbe8c4d07cdb2d70f6d78a1fc7e764fd73cb6d166c4b35eb0232: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:12.444946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3359815266.mount: Deactivated successfully. Jul 15 05:16:12.472463 containerd[1996]: time="2025-07-15T05:16:12.472412153Z" level=info msg="CreateContainer within sandbox \"9027823144a6db6bb99a2133c33c74f2e537701f4297ec9a4c4d59775f7e4548\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cd9a87cdfce5bbe8c4d07cdb2d70f6d78a1fc7e764fd73cb6d166c4b35eb0232\"" Jul 15 05:16:12.475529 containerd[1996]: time="2025-07-15T05:16:12.475491648Z" level=info msg="StartContainer for \"cd9a87cdfce5bbe8c4d07cdb2d70f6d78a1fc7e764fd73cb6d166c4b35eb0232\"" Jul 15 05:16:12.484122 containerd[1996]: time="2025-07-15T05:16:12.484054661Z" level=info msg="connecting to shim cd9a87cdfce5bbe8c4d07cdb2d70f6d78a1fc7e764fd73cb6d166c4b35eb0232" address="unix:///run/containerd/s/a0f6d28b53360b4ff2225ea9aab1ada83023b4da189b2e349cab0c9a70217a1f" protocol=ttrpc version=3 Jul 15 05:16:12.509168 systemd[1]: Started cri-containerd-cd9a87cdfce5bbe8c4d07cdb2d70f6d78a1fc7e764fd73cb6d166c4b35eb0232.scope - libcontainer container cd9a87cdfce5bbe8c4d07cdb2d70f6d78a1fc7e764fd73cb6d166c4b35eb0232. 
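The "connecting to shim ..." and "Started cri-containerd-<id>.scope" entries show containerd (v2.0.5, per the kubelet's runtime line earlier) creating the kube-proxy sandbox and container in the k8s.io namespace. A hedged sketch of inspecting those containers with the containerd Go client; the socket path and the v1 client module path are assumptions (containerd 2.x relocated the client package, but the API remains compatible):

```go
// Sketch (assumptions: default socket path, "k8s.io" CRI namespace): list the
// containers behind the cri-containerd-<id>.scope units started above.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		img, err := c.Image(ctx)
		if err != nil {
			fmt.Println(c.ID(), "-> (no image)")
			continue
		}
		fmt.Println(c.ID(), "->", img.Name())
	}
}
```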
Jul 15 05:16:12.559574 containerd[1996]: time="2025-07-15T05:16:12.559481011Z" level=info msg="StartContainer for \"cd9a87cdfce5bbe8c4d07cdb2d70f6d78a1fc7e764fd73cb6d166c4b35eb0232\" returns successfully" Jul 15 05:16:12.651392 kubelet[3322]: I0715 05:16:12.651337 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k2lsh" podStartSLOduration=1.65131857 podStartE2EDuration="1.65131857s" podCreationTimestamp="2025-07-15 05:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:12.650985968 +0000 UTC m=+6.243320926" watchObservedRunningTime="2025-07-15 05:16:12.65131857 +0000 UTC m=+6.243653528" Jul 15 05:16:12.713689 containerd[1996]: time="2025-07-15T05:16:12.713542097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7j2b9,Uid:3fcd21d2-35c3-4b43-9b5f-586bcb678c9e,Namespace:tigera-operator,Attempt:0,}" Jul 15 05:16:12.749168 containerd[1996]: time="2025-07-15T05:16:12.749076926Z" level=info msg="connecting to shim b965e46b09224cfecb5f18a4fd64323fbd09e5993dd032754bb713767077508f" address="unix:///run/containerd/s/ac608c2a2c098c2ebeb12d2431ea6113231a19474682abad16ac372f23b7b880" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:12.788143 systemd[1]: Started cri-containerd-b965e46b09224cfecb5f18a4fd64323fbd09e5993dd032754bb713767077508f.scope - libcontainer container b965e46b09224cfecb5f18a4fd64323fbd09e5993dd032754bb713767077508f. Jul 15 05:16:12.846423 containerd[1996]: time="2025-07-15T05:16:12.846353087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7j2b9,Uid:3fcd21d2-35c3-4b43-9b5f-586bcb678c9e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b965e46b09224cfecb5f18a4fd64323fbd09e5993dd032754bb713767077508f\"" Jul 15 05:16:12.849365 containerd[1996]: time="2025-07-15T05:16:12.849020995Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 05:16:13.138430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283354250.mount: Deactivated successfully. Jul 15 05:16:14.374364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402813163.mount: Deactivated successfully. 
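The pod_startup_latency_tracker entry for kube-proxy-k2lsh reports a podStartSLOduration of roughly 1.65s; since both pull timestamps are the zero value (no image pull was needed), that figure is essentially observedRunningTime minus podCreationTimestamp. A small sketch of the arithmetic, assuming the logged values use Go's default time.Time string layout (which is what klog printed here):

```go
// Sketch: reproduce the ~1.65s startup duration for kube-proxy-k2lsh from the
// two timestamps in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-07-15 05:16:11 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-07-15 05:16:12.650985968 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// No image pull happened (firstStartedPulling/lastFinishedPulling are the
	// zero time), so the whole window counts toward the startup SLO.
	fmt.Println("startup duration:", running.Sub(created)) // ~1.65s
}
```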
Jul 15 05:16:15.114647 containerd[1996]: time="2025-07-15T05:16:15.114551283Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:15.115935 containerd[1996]: time="2025-07-15T05:16:15.115816829Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 15 05:16:15.117929 containerd[1996]: time="2025-07-15T05:16:15.116976011Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:15.119304 containerd[1996]: time="2025-07-15T05:16:15.119265977Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:15.119843 containerd[1996]: time="2025-07-15T05:16:15.119817758Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.270764724s" Jul 15 05:16:15.119951 containerd[1996]: time="2025-07-15T05:16:15.119937038Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 15 05:16:15.122302 containerd[1996]: time="2025-07-15T05:16:15.122253062Z" level=info msg="CreateContainer within sandbox \"b965e46b09224cfecb5f18a4fd64323fbd09e5993dd032754bb713767077508f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 05:16:15.130354 containerd[1996]: time="2025-07-15T05:16:15.130317605Z" level=info msg="Container 9425325fd2ac7dc432e996ec9995a7edfe4501fa5e2090f7af056065df121e81: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:15.147668 containerd[1996]: time="2025-07-15T05:16:15.147610864Z" level=info msg="CreateContainer within sandbox \"b965e46b09224cfecb5f18a4fd64323fbd09e5993dd032754bb713767077508f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9425325fd2ac7dc432e996ec9995a7edfe4501fa5e2090f7af056065df121e81\"" Jul 15 05:16:15.148482 containerd[1996]: time="2025-07-15T05:16:15.148452992Z" level=info msg="StartContainer for \"9425325fd2ac7dc432e996ec9995a7edfe4501fa5e2090f7af056065df121e81\"" Jul 15 05:16:15.149294 containerd[1996]: time="2025-07-15T05:16:15.149263012Z" level=info msg="connecting to shim 9425325fd2ac7dc432e996ec9995a7edfe4501fa5e2090f7af056065df121e81" address="unix:///run/containerd/s/ac608c2a2c098c2ebeb12d2431ea6113231a19474682abad16ac372f23b7b880" protocol=ttrpc version=3 Jul 15 05:16:15.178118 systemd[1]: Started cri-containerd-9425325fd2ac7dc432e996ec9995a7edfe4501fa5e2090f7af056065df121e81.scope - libcontainer container 9425325fd2ac7dc432e996ec9995a7edfe4501fa5e2090f7af056065df121e81. 
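The tigera-operator pull resolves quay.io/tigera/operator:v1.38.3 to the sha256:dbf1bad0... repo digest in about 2.27s. A hedged sketch of issuing the same pull through the containerd Go client and reading back the resolved digest (same socket and module-path assumptions as the listing sketch above):

```go
// Sketch: repeat the pull the kubelet requested and print the resolved target
// digest, which should match the repo digest recorded in the log.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.3", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```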
Jul 15 05:16:15.212059 containerd[1996]: time="2025-07-15T05:16:15.212012582Z" level=info msg="StartContainer for \"9425325fd2ac7dc432e996ec9995a7edfe4501fa5e2090f7af056065df121e81\" returns successfully" Jul 15 05:16:15.677161 kubelet[3322]: I0715 05:16:15.677101 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-7j2b9" podStartSLOduration=1.4045968229999999 podStartE2EDuration="3.677083622s" podCreationTimestamp="2025-07-15 05:16:12 +0000 UTC" firstStartedPulling="2025-07-15 05:16:12.848382678 +0000 UTC m=+6.440717619" lastFinishedPulling="2025-07-15 05:16:15.120869466 +0000 UTC m=+8.713204418" observedRunningTime="2025-07-15 05:16:15.662175136 +0000 UTC m=+9.254510097" watchObservedRunningTime="2025-07-15 05:16:15.677083622 +0000 UTC m=+9.269418582" Jul 15 05:16:22.135364 sudo[2387]: pam_unix(sudo:session): session closed for user root Jul 15 05:16:22.160641 sshd[2386]: Connection closed by 139.178.89.65 port 41196 Jul 15 05:16:22.159969 sshd-session[2383]: pam_unix(sshd:session): session closed for user core Jul 15 05:16:22.168173 systemd-logind[1980]: Session 9 logged out. Waiting for processes to exit. Jul 15 05:16:22.168950 systemd[1]: sshd@8-172.31.18.224:22-139.178.89.65:41196.service: Deactivated successfully. Jul 15 05:16:22.174542 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 05:16:22.175174 systemd[1]: session-9.scope: Consumed 5.282s CPU time, 152.3M memory peak. Jul 15 05:16:22.179644 systemd-logind[1980]: Removed session 9. Jul 15 05:16:26.840931 kubelet[3322]: I0715 05:16:26.840860 3322 status_manager.go:890] "Failed to get status for pod" podUID="ee540280-1638-419f-a936-bf87d6ddf3d3" pod="calico-system/calico-typha-57d5dd694b-9qtxf" err="pods \"calico-typha-57d5dd694b-9qtxf\" is forbidden: User \"system:node:ip-172-31-18-224\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-18-224' and this object" Jul 15 05:16:26.841416 kubelet[3322]: W0715 05:16:26.840991 3322 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-18-224" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-18-224' and this object Jul 15 05:16:26.841416 kubelet[3322]: E0715 05:16:26.841030 3322 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ip-172-31-18-224\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-18-224' and this object" logger="UnhandledError" Jul 15 05:16:26.842758 kubelet[3322]: W0715 05:16:26.842425 3322 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-224" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-18-224' and this object Jul 15 05:16:26.842758 kubelet[3322]: E0715 05:16:26.842579 3322 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User 
\"system:node:ip-172-31-18-224\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-18-224' and this object" logger="UnhandledError" Jul 15 05:16:26.845317 systemd[1]: Created slice kubepods-besteffort-podee540280_1638_419f_a936_bf87d6ddf3d3.slice - libcontainer container kubepods-besteffort-podee540280_1638_419f_a936_bf87d6ddf3d3.slice. Jul 15 05:16:26.909679 kubelet[3322]: I0715 05:16:26.909622 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee540280-1638-419f-a936-bf87d6ddf3d3-typha-certs\") pod \"calico-typha-57d5dd694b-9qtxf\" (UID: \"ee540280-1638-419f-a936-bf87d6ddf3d3\") " pod="calico-system/calico-typha-57d5dd694b-9qtxf" Jul 15 05:16:26.909844 kubelet[3322]: I0715 05:16:26.909687 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee540280-1638-419f-a936-bf87d6ddf3d3-tigera-ca-bundle\") pod \"calico-typha-57d5dd694b-9qtxf\" (UID: \"ee540280-1638-419f-a936-bf87d6ddf3d3\") " pod="calico-system/calico-typha-57d5dd694b-9qtxf" Jul 15 05:16:26.909844 kubelet[3322]: I0715 05:16:26.909718 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-697bd\" (UniqueName: \"kubernetes.io/projected/ee540280-1638-419f-a936-bf87d6ddf3d3-kube-api-access-697bd\") pod \"calico-typha-57d5dd694b-9qtxf\" (UID: \"ee540280-1638-419f-a936-bf87d6ddf3d3\") " pod="calico-system/calico-typha-57d5dd694b-9qtxf" Jul 15 05:16:27.099883 systemd[1]: Created slice kubepods-besteffort-podd2cbf88e_2830_4579_bb9e_edbf674b313e.slice - libcontainer container kubepods-besteffort-podd2cbf88e_2830_4579_bb9e_edbf674b313e.slice. 
Jul 15 05:16:27.208732 kubelet[3322]: E0715 05:16:27.208325 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plmkb" podUID="792c1079-d6ad-4977-9449-eb7585301bdc" Jul 15 05:16:27.211662 kubelet[3322]: I0715 05:16:27.211618 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d2cbf88e-2830-4579-bb9e-edbf674b313e-flexvol-driver-host\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.211986 kubelet[3322]: I0715 05:16:27.211867 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d2cbf88e-2830-4579-bb9e-edbf674b313e-cni-net-dir\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.211986 kubelet[3322]: I0715 05:16:27.211890 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2cbf88e-2830-4579-bb9e-edbf674b313e-xtables-lock\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.211986 kubelet[3322]: I0715 05:16:27.211949 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d2cbf88e-2830-4579-bb9e-edbf674b313e-cni-bin-dir\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.211986 kubelet[3322]: I0715 05:16:27.211964 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2cbf88e-2830-4579-bb9e-edbf674b313e-tigera-ca-bundle\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.212247 kubelet[3322]: I0715 05:16:27.212156 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d2cbf88e-2830-4579-bb9e-edbf674b313e-node-certs\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.212247 kubelet[3322]: I0715 05:16:27.212175 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d2cbf88e-2830-4579-bb9e-edbf674b313e-var-lib-calico\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.212408 kubelet[3322]: I0715 05:16:27.212313 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d2cbf88e-2830-4579-bb9e-edbf674b313e-var-run-calico\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.212408 kubelet[3322]: I0715 05:16:27.212334 3322 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d2cbf88e-2830-4579-bb9e-edbf674b313e-cni-log-dir\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.212542 kubelet[3322]: I0715 05:16:27.212349 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d2cbf88e-2830-4579-bb9e-edbf674b313e-policysync\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.212542 kubelet[3322]: I0715 05:16:27.212507 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h77m6\" (UniqueName: \"kubernetes.io/projected/d2cbf88e-2830-4579-bb9e-edbf674b313e-kube-api-access-h77m6\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.212542 kubelet[3322]: I0715 05:16:27.212525 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2cbf88e-2830-4579-bb9e-edbf674b313e-lib-modules\") pod \"calico-node-h87qb\" (UID: \"d2cbf88e-2830-4579-bb9e-edbf674b313e\") " pod="calico-system/calico-node-h87qb" Jul 15 05:16:27.313316 kubelet[3322]: I0715 05:16:27.313253 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792c1079-d6ad-4977-9449-eb7585301bdc-kubelet-dir\") pod \"csi-node-driver-plmkb\" (UID: \"792c1079-d6ad-4977-9449-eb7585301bdc\") " pod="calico-system/csi-node-driver-plmkb" Jul 15 05:16:27.313471 kubelet[3322]: I0715 05:16:27.313379 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/792c1079-d6ad-4977-9449-eb7585301bdc-registration-dir\") pod \"csi-node-driver-plmkb\" (UID: \"792c1079-d6ad-4977-9449-eb7585301bdc\") " pod="calico-system/csi-node-driver-plmkb" Jul 15 05:16:27.313471 kubelet[3322]: I0715 05:16:27.313451 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/792c1079-d6ad-4977-9449-eb7585301bdc-socket-dir\") pod \"csi-node-driver-plmkb\" (UID: \"792c1079-d6ad-4977-9449-eb7585301bdc\") " pod="calico-system/csi-node-driver-plmkb" Jul 15 05:16:27.313570 kubelet[3322]: I0715 05:16:27.313473 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/792c1079-d6ad-4977-9449-eb7585301bdc-varrun\") pod \"csi-node-driver-plmkb\" (UID: \"792c1079-d6ad-4977-9449-eb7585301bdc\") " pod="calico-system/csi-node-driver-plmkb" Jul 15 05:16:27.313570 kubelet[3322]: I0715 05:16:27.313534 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p94b\" (UniqueName: \"kubernetes.io/projected/792c1079-d6ad-4977-9449-eb7585301bdc-kube-api-access-7p94b\") pod \"csi-node-driver-plmkb\" (UID: \"792c1079-d6ad-4977-9449-eb7585301bdc\") " pod="calico-system/csi-node-driver-plmkb" Jul 15 05:16:27.324797 kubelet[3322]: E0715 05:16:27.320624 3322 driver-call.go:262] Failed to unmarshal output 
for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.324797 kubelet[3322]: W0715 05:16:27.320654 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.324797 kubelet[3322]: E0715 05:16:27.322372 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.324797 kubelet[3322]: E0715 05:16:27.323869 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.324797 kubelet[3322]: W0715 05:16:27.323890 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.324797 kubelet[3322]: E0715 05:16:27.323954 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.324797 kubelet[3322]: E0715 05:16:27.324227 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.324797 kubelet[3322]: W0715 05:16:27.324240 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.324797 kubelet[3322]: E0715 05:16:27.324268 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.324797 kubelet[3322]: E0715 05:16:27.324480 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.325351 kubelet[3322]: W0715 05:16:27.324489 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.325351 kubelet[3322]: E0715 05:16:27.324567 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.325351 kubelet[3322]: E0715 05:16:27.325029 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.325351 kubelet[3322]: W0715 05:16:27.325042 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.325351 kubelet[3322]: E0715 05:16:27.325063 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.329934 kubelet[3322]: E0715 05:16:27.325809 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.329934 kubelet[3322]: W0715 05:16:27.325826 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.329934 kubelet[3322]: E0715 05:16:27.325860 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.329934 kubelet[3322]: E0715 05:16:27.326146 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.329934 kubelet[3322]: W0715 05:16:27.326156 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.329934 kubelet[3322]: E0715 05:16:27.326168 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.414579 kubelet[3322]: E0715 05:16:27.414495 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.414579 kubelet[3322]: W0715 05:16:27.414522 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.414579 kubelet[3322]: E0715 05:16:27.414548 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.415249 kubelet[3322]: E0715 05:16:27.415205 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.415249 kubelet[3322]: W0715 05:16:27.415225 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.415568 kubelet[3322]: E0715 05:16:27.415429 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.415832 kubelet[3322]: E0715 05:16:27.415679 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.415832 kubelet[3322]: W0715 05:16:27.415690 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.415832 kubelet[3322]: E0715 05:16:27.415704 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.416108 kubelet[3322]: E0715 05:16:27.416097 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.416176 kubelet[3322]: W0715 05:16:27.416166 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.416264 kubelet[3322]: E0715 05:16:27.416252 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.416930 kubelet[3322]: E0715 05:16:27.416648 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.417054 kubelet[3322]: W0715 05:16:27.417029 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.417179 kubelet[3322]: E0715 05:16:27.417134 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.417553 kubelet[3322]: E0715 05:16:27.417507 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.419039 kubelet[3322]: W0715 05:16:27.418947 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.419039 kubelet[3322]: E0715 05:16:27.418988 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.419376 kubelet[3322]: E0715 05:16:27.419345 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.419376 kubelet[3322]: W0715 05:16:27.419359 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.419548 kubelet[3322]: E0715 05:16:27.419536 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.419808 kubelet[3322]: E0715 05:16:27.419795 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.419920 kubelet[3322]: W0715 05:16:27.419877 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.420022 kubelet[3322]: E0715 05:16:27.419988 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.420294 kubelet[3322]: E0715 05:16:27.420265 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.420294 kubelet[3322]: W0715 05:16:27.420278 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.420594 kubelet[3322]: E0715 05:16:27.420532 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.420719 kubelet[3322]: E0715 05:16:27.420694 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.420719 kubelet[3322]: W0715 05:16:27.420705 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.420885 kubelet[3322]: E0715 05:16:27.420872 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.421272 kubelet[3322]: E0715 05:16:27.421232 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.421272 kubelet[3322]: W0715 05:16:27.421246 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.421924 kubelet[3322]: E0715 05:16:27.421643 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.422196 kubelet[3322]: E0715 05:16:27.422137 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.422511 kubelet[3322]: W0715 05:16:27.422358 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.422748 kubelet[3322]: E0715 05:16:27.422614 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.423457 kubelet[3322]: E0715 05:16:27.423282 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.423457 kubelet[3322]: W0715 05:16:27.423296 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.424011 kubelet[3322]: E0715 05:16:27.423839 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.424961 kubelet[3322]: E0715 05:16:27.424129 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.424961 kubelet[3322]: W0715 05:16:27.424251 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.425153 kubelet[3322]: E0715 05:16:27.425137 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.425411 kubelet[3322]: E0715 05:16:27.425398 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.425529 kubelet[3322]: W0715 05:16:27.425491 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.425713 kubelet[3322]: E0715 05:16:27.425700 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.426988 kubelet[3322]: E0715 05:16:27.425943 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.427104 kubelet[3322]: W0715 05:16:27.426971 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.427360 kubelet[3322]: E0715 05:16:27.427302 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.427502 kubelet[3322]: E0715 05:16:27.427474 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.427502 kubelet[3322]: W0715 05:16:27.427487 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.427762 kubelet[3322]: E0715 05:16:27.427725 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.427877 kubelet[3322]: E0715 05:16:27.427867 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.428023 kubelet[3322]: W0715 05:16:27.427959 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.428122 kubelet[3322]: E0715 05:16:27.428047 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.430058 kubelet[3322]: E0715 05:16:27.430023 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.430058 kubelet[3322]: W0715 05:16:27.430039 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.430416 kubelet[3322]: E0715 05:16:27.430301 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.430562 kubelet[3322]: E0715 05:16:27.430552 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.430661 kubelet[3322]: W0715 05:16:27.430613 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.430746 kubelet[3322]: E0715 05:16:27.430734 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.431013 kubelet[3322]: E0715 05:16:27.431001 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.431166 kubelet[3322]: W0715 05:16:27.431087 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.431246 kubelet[3322]: E0715 05:16:27.431234 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.431661 kubelet[3322]: E0715 05:16:27.431529 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.431661 kubelet[3322]: W0715 05:16:27.431542 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.431661 kubelet[3322]: E0715 05:16:27.431557 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.431876 kubelet[3322]: E0715 05:16:27.431865 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.432209 kubelet[3322]: W0715 05:16:27.432063 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.432209 kubelet[3322]: E0715 05:16:27.432085 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.432405 kubelet[3322]: E0715 05:16:27.432385 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.433970 kubelet[3322]: W0715 05:16:27.432470 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.433970 kubelet[3322]: E0715 05:16:27.432490 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.434336 kubelet[3322]: E0715 05:16:27.434323 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.434447 kubelet[3322]: W0715 05:16:27.434404 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.434447 kubelet[3322]: E0715 05:16:27.434422 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.011341 kubelet[3322]: E0715 05:16:28.011302 3322 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 15 05:16:28.011713 kubelet[3322]: E0715 05:16:28.011401 3322 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee540280-1638-419f-a936-bf87d6ddf3d3-tigera-ca-bundle podName:ee540280-1638-419f-a936-bf87d6ddf3d3 nodeName:}" failed. No retries permitted until 2025-07-15 05:16:28.511375232 +0000 UTC m=+22.103710184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/ee540280-1638-419f-a936-bf87d6ddf3d3-tigera-ca-bundle") pod "calico-typha-57d5dd694b-9qtxf" (UID: "ee540280-1638-419f-a936-bf87d6ddf3d3") : failed to sync configmap cache: timed out waiting for the condition Jul 15 05:16:28.031869 kubelet[3322]: E0715 05:16:28.031788 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.031869 kubelet[3322]: W0715 05:16:28.031815 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.031869 kubelet[3322]: E0715 05:16:28.031834 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.058107 kubelet[3322]: E0715 05:16:28.058050 3322 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 15 05:16:28.058107 kubelet[3322]: E0715 05:16:28.058097 3322 projected.go:194] Error preparing data for projected volume kube-api-access-697bd for pod calico-system/calico-typha-57d5dd694b-9qtxf: failed to sync configmap cache: timed out waiting for the condition Jul 15 05:16:28.058277 kubelet[3322]: E0715 05:16:28.058174 3322 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee540280-1638-419f-a936-bf87d6ddf3d3-kube-api-access-697bd podName:ee540280-1638-419f-a936-bf87d6ddf3d3 nodeName:}" failed. No retries permitted until 2025-07-15 05:16:28.558155592 +0000 UTC m=+22.150490533 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-697bd" (UniqueName: "kubernetes.io/projected/ee540280-1638-419f-a936-bf87d6ddf3d3-kube-api-access-697bd") pod "calico-typha-57d5dd694b-9qtxf" (UID: "ee540280-1638-419f-a936-bf87d6ddf3d3") : failed to sync configmap cache: timed out waiting for the condition Jul 15 05:16:28.076216 kubelet[3322]: E0715 05:16:28.076109 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.076216 kubelet[3322]: W0715 05:16:28.076151 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.076216 kubelet[3322]: E0715 05:16:28.076172 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.133575 kubelet[3322]: E0715 05:16:28.133518 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.133575 kubelet[3322]: W0715 05:16:28.133547 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.133575 kubelet[3322]: E0715 05:16:28.133569 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.133933 kubelet[3322]: E0715 05:16:28.133840 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.133933 kubelet[3322]: W0715 05:16:28.133867 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.133933 kubelet[3322]: E0715 05:16:28.133881 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.235196 kubelet[3322]: E0715 05:16:28.235158 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.235196 kubelet[3322]: W0715 05:16:28.235185 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.235196 kubelet[3322]: E0715 05:16:28.235207 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.235425 kubelet[3322]: E0715 05:16:28.235397 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.235425 kubelet[3322]: W0715 05:16:28.235420 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.235425 kubelet[3322]: E0715 05:16:28.235431 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.330552 kubelet[3322]: E0715 05:16:28.330382 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.330552 kubelet[3322]: W0715 05:16:28.330409 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.330552 kubelet[3322]: E0715 05:16:28.330434 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.332211 kubelet[3322]: E0715 05:16:28.332183 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.332211 kubelet[3322]: W0715 05:16:28.332209 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.332368 kubelet[3322]: E0715 05:16:28.332231 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.336884 kubelet[3322]: E0715 05:16:28.336856 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.336884 kubelet[3322]: W0715 05:16:28.336882 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.338966 kubelet[3322]: E0715 05:16:28.338933 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.340193 kubelet[3322]: E0715 05:16:28.340167 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.340299 kubelet[3322]: W0715 05:16:28.340194 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.340299 kubelet[3322]: E0715 05:16:28.340217 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.441332 kubelet[3322]: E0715 05:16:28.441296 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.441332 kubelet[3322]: W0715 05:16:28.441322 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.441558 kubelet[3322]: E0715 05:16:28.441346 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.441640 kubelet[3322]: E0715 05:16:28.441618 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.441640 kubelet[3322]: W0715 05:16:28.441636 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.441749 kubelet[3322]: E0715 05:16:28.441653 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.542414 kubelet[3322]: E0715 05:16:28.542378 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.542414 kubelet[3322]: W0715 05:16:28.542403 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.542554 kubelet[3322]: E0715 05:16:28.542424 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.543243 kubelet[3322]: E0715 05:16:28.542676 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.543243 kubelet[3322]: W0715 05:16:28.542874 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.543243 kubelet[3322]: E0715 05:16:28.542892 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.543243 kubelet[3322]: E0715 05:16:28.543121 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.543243 kubelet[3322]: W0715 05:16:28.543128 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.543243 kubelet[3322]: E0715 05:16:28.543138 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.543585 kubelet[3322]: E0715 05:16:28.543277 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.543585 kubelet[3322]: W0715 05:16:28.543284 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.543585 kubelet[3322]: E0715 05:16:28.543290 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.543585 kubelet[3322]: E0715 05:16:28.543404 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.543585 kubelet[3322]: W0715 05:16:28.543410 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.543585 kubelet[3322]: E0715 05:16:28.543416 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.543585 kubelet[3322]: E0715 05:16:28.543571 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.543585 kubelet[3322]: W0715 05:16:28.543577 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.543585 kubelet[3322]: E0715 05:16:28.543584 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.544428 kubelet[3322]: E0715 05:16:28.544409 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.544428 kubelet[3322]: W0715 05:16:28.544422 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.544428 kubelet[3322]: E0715 05:16:28.544433 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.607391 containerd[1996]: time="2025-07-15T05:16:28.607074097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h87qb,Uid:d2cbf88e-2830-4579-bb9e-edbf674b313e,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:28.645097 kubelet[3322]: E0715 05:16:28.645066 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.645097 kubelet[3322]: W0715 05:16:28.645090 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.645240 kubelet[3322]: E0715 05:16:28.645126 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.645452 kubelet[3322]: E0715 05:16:28.645435 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.645579 kubelet[3322]: W0715 05:16:28.645515 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.645579 kubelet[3322]: E0715 05:16:28.645532 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.646221 containerd[1996]: time="2025-07-15T05:16:28.646103315Z" level=info msg="connecting to shim 1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b" address="unix:///run/containerd/s/5923df5522682357a8afcf4f2585eae96d8250bf4fb99b3781e58891c0dcb5c4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:28.646299 kubelet[3322]: E0715 05:16:28.646134 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.646299 kubelet[3322]: W0715 05:16:28.646144 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.646299 kubelet[3322]: E0715 05:16:28.646156 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.647310 kubelet[3322]: E0715 05:16:28.647072 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.647310 kubelet[3322]: W0715 05:16:28.647084 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.647310 kubelet[3322]: E0715 05:16:28.647095 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.647850 kubelet[3322]: E0715 05:16:28.647838 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.648143 kubelet[3322]: W0715 05:16:28.648065 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.648143 kubelet[3322]: E0715 05:16:28.648082 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.659472 kubelet[3322]: E0715 05:16:28.658803 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.659472 kubelet[3322]: W0715 05:16:28.659067 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.659472 kubelet[3322]: E0715 05:16:28.659089 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.682154 systemd[1]: Started cri-containerd-1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b.scope - libcontainer container 1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b. Jul 15 05:16:28.719261 containerd[1996]: time="2025-07-15T05:16:28.719217002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h87qb,Uid:d2cbf88e-2830-4579-bb9e-edbf674b313e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b\"" Jul 15 05:16:28.721297 containerd[1996]: time="2025-07-15T05:16:28.721267348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 05:16:28.951124 containerd[1996]: time="2025-07-15T05:16:28.950636468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57d5dd694b-9qtxf,Uid:ee540280-1638-419f-a936-bf87d6ddf3d3,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:28.982359 containerd[1996]: time="2025-07-15T05:16:28.982313661Z" level=info msg="connecting to shim 1581437de0368dcac0bfb33f19b98f20d09ba6b22c8c7fadcba928a6472c8bae" address="unix:///run/containerd/s/16bfe0fa8abe850039fcf3c10acc1c7124ab97cfe44d7ceafd2a250d879425e9" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:29.011990 systemd[1]: Started cri-containerd-1581437de0368dcac0bfb33f19b98f20d09ba6b22c8c7fadcba928a6472c8bae.scope - libcontainer container 1581437de0368dcac0bfb33f19b98f20d09ba6b22c8c7fadcba928a6472c8bae. 
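
[Annotation] The repeated kubelet messages above come from FlexVolume plugin probing: kubelet executes each driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with an "init" argument and expects a JSON status object on stdout. Because the nodeagent~uds/uds binary is not installed yet at this point in the boot, the call produces no output, and decoding an empty byte slice with Go's encoding/json is exactly what yields "unexpected end of JSON input". The following is only a minimal illustrative sketch of that call pattern, not kubelet's actual driver-call.go, and DriverStatus is a simplified stand-in:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a simplified stand-in for the JSON object a FlexVolume
// driver is expected to print on stdout.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callInit mimics the probe pattern: run "<driver> init" and decode its stdout.
func callInit(driverPath string) (*DriverStatus, error) {
	out, execErr := exec.Command(driverPath, "init").Output() // out stays empty when the binary is missing

	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Decoding empty output is what produces "unexpected end of JSON input";
		// execErr separately carries the "executable not found" style failure,
		// matching the paired E/W lines in the kubelet log above.
		return nil, fmt.Errorf("failed to unmarshal output %q for command init: %v (driver call error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := callInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
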
Jul 15 05:16:29.108777 containerd[1996]: time="2025-07-15T05:16:29.108743192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57d5dd694b-9qtxf,Uid:ee540280-1638-419f-a936-bf87d6ddf3d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"1581437de0368dcac0bfb33f19b98f20d09ba6b22c8c7fadcba928a6472c8bae\"" Jul 15 05:16:29.569250 kubelet[3322]: E0715 05:16:29.568211 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plmkb" podUID="792c1079-d6ad-4977-9449-eb7585301bdc" Jul 15 05:16:30.174734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187507249.mount: Deactivated successfully. Jul 15 05:16:30.354522 containerd[1996]: time="2025-07-15T05:16:30.354466498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:30.356312 containerd[1996]: time="2025-07-15T05:16:30.356273419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Jul 15 05:16:30.359346 containerd[1996]: time="2025-07-15T05:16:30.359277896Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:30.363855 containerd[1996]: time="2025-07-15T05:16:30.363410165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:30.364672 containerd[1996]: time="2025-07-15T05:16:30.364626697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.643318978s" Jul 15 05:16:30.364792 containerd[1996]: time="2025-07-15T05:16:30.364674717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 05:16:30.366503 containerd[1996]: time="2025-07-15T05:16:30.366424028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 05:16:30.380034 containerd[1996]: time="2025-07-15T05:16:30.378213407Z" level=info msg="CreateContainer within sandbox \"1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 05:16:30.405388 containerd[1996]: time="2025-07-15T05:16:30.405341357Z" level=info msg="Container 553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:30.444029 containerd[1996]: time="2025-07-15T05:16:30.443888240Z" level=info msg="CreateContainer within sandbox \"1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd\"" Jul 15 
05:16:30.445107 containerd[1996]: time="2025-07-15T05:16:30.445063390Z" level=info msg="StartContainer for \"553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd\"" Jul 15 05:16:30.448001 containerd[1996]: time="2025-07-15T05:16:30.447963662Z" level=info msg="connecting to shim 553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd" address="unix:///run/containerd/s/5923df5522682357a8afcf4f2585eae96d8250bf4fb99b3781e58891c0dcb5c4" protocol=ttrpc version=3 Jul 15 05:16:30.481112 systemd[1]: Started cri-containerd-553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd.scope - libcontainer container 553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd. Jul 15 05:16:30.535423 containerd[1996]: time="2025-07-15T05:16:30.535379060Z" level=info msg="StartContainer for \"553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd\" returns successfully" Jul 15 05:16:30.546830 systemd[1]: cri-containerd-553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd.scope: Deactivated successfully. Jul 15 05:16:30.581112 containerd[1996]: time="2025-07-15T05:16:30.580988181Z" level=info msg="received exit event container_id:\"553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd\" id:\"553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd\" pid:4079 exited_at:{seconds:1752556590 nanos:549144426}" Jul 15 05:16:30.581112 containerd[1996]: time="2025-07-15T05:16:30.581068757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd\" id:\"553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd\" pid:4079 exited_at:{seconds:1752556590 nanos:549144426}" Jul 15 05:16:30.612893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-553dba33b1e06aaba217cb4c5d279de85b75ae5ae86b9b93c4970688d117f0cd-rootfs.mount: Deactivated successfully. 
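
[Annotation] The TaskExit event above reports the exit time of the flexvol-driver container as raw epoch fields (exited_at:{seconds:1752556590 nanos:549144426}). A small conversion, using only the values taken from the log, confirms this is the same instant the surrounding journal lines were stamped (Jul 15 05:16:30 UTC); the snippet is just illustrative arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at fields from the TaskExit event for container 553dba33b1e0... above.
	const seconds, nanos = 1752556590, 549144426

	t := time.Unix(seconds, nanos).UTC()
	fmt.Println(t.Format("Jan 02 15:04:05.000000 MST"))
	// prints: Jul 15 05:16:30.549144 UTC
}
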
Jul 15 05:16:31.570004 kubelet[3322]: E0715 05:16:31.568591 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plmkb" podUID="792c1079-d6ad-4977-9449-eb7585301bdc" Jul 15 05:16:32.548496 containerd[1996]: time="2025-07-15T05:16:32.548345133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:32.549627 containerd[1996]: time="2025-07-15T05:16:32.549575358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33740523" Jul 15 05:16:32.551214 containerd[1996]: time="2025-07-15T05:16:32.551154854Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:32.554016 containerd[1996]: time="2025-07-15T05:16:32.553828143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:32.554666 containerd[1996]: time="2025-07-15T05:16:32.554523976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.188065473s" Jul 15 05:16:32.554666 containerd[1996]: time="2025-07-15T05:16:32.554555286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 05:16:32.555924 containerd[1996]: time="2025-07-15T05:16:32.555886221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 05:16:32.575095 containerd[1996]: time="2025-07-15T05:16:32.575015246Z" level=info msg="CreateContainer within sandbox \"1581437de0368dcac0bfb33f19b98f20d09ba6b22c8c7fadcba928a6472c8bae\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 05:16:32.602929 containerd[1996]: time="2025-07-15T05:16:32.602056936Z" level=info msg="Container 4005ad1a39237866bea5acd8d982992843be30d3adca1830b36c44443f5928d1: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:32.619275 containerd[1996]: time="2025-07-15T05:16:32.619229102Z" level=info msg="CreateContainer within sandbox \"1581437de0368dcac0bfb33f19b98f20d09ba6b22c8c7fadcba928a6472c8bae\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4005ad1a39237866bea5acd8d982992843be30d3adca1830b36c44443f5928d1\"" Jul 15 05:16:32.620225 containerd[1996]: time="2025-07-15T05:16:32.620156884Z" level=info msg="StartContainer for \"4005ad1a39237866bea5acd8d982992843be30d3adca1830b36c44443f5928d1\"" Jul 15 05:16:32.622332 containerd[1996]: time="2025-07-15T05:16:32.622288377Z" level=info msg="connecting to shim 4005ad1a39237866bea5acd8d982992843be30d3adca1830b36c44443f5928d1" address="unix:///run/containerd/s/16bfe0fa8abe850039fcf3c10acc1c7124ab97cfe44d7ceafd2a250d879425e9" protocol=ttrpc version=3 Jul 15 05:16:32.657159 systemd[1]: Started 
cri-containerd-4005ad1a39237866bea5acd8d982992843be30d3adca1830b36c44443f5928d1.scope - libcontainer container 4005ad1a39237866bea5acd8d982992843be30d3adca1830b36c44443f5928d1. Jul 15 05:16:32.737184 containerd[1996]: time="2025-07-15T05:16:32.737121508Z" level=info msg="StartContainer for \"4005ad1a39237866bea5acd8d982992843be30d3adca1830b36c44443f5928d1\" returns successfully" Jul 15 05:16:33.572615 kubelet[3322]: E0715 05:16:33.572572 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plmkb" podUID="792c1079-d6ad-4977-9449-eb7585301bdc" Jul 15 05:16:34.759944 kubelet[3322]: I0715 05:16:34.759619 3322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:35.568464 kubelet[3322]: E0715 05:16:35.568003 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plmkb" podUID="792c1079-d6ad-4977-9449-eb7585301bdc" Jul 15 05:16:37.037327 containerd[1996]: time="2025-07-15T05:16:37.037258639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:37.038826 containerd[1996]: time="2025-07-15T05:16:37.038529160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 05:16:37.040062 containerd[1996]: time="2025-07-15T05:16:37.039799041Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:37.042867 containerd[1996]: time="2025-07-15T05:16:37.042830481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:37.043333 containerd[1996]: time="2025-07-15T05:16:37.043303050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.487373817s" Jul 15 05:16:37.043333 containerd[1996]: time="2025-07-15T05:16:37.043335968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 05:16:37.070400 containerd[1996]: time="2025-07-15T05:16:37.070357909Z" level=info msg="CreateContainer within sandbox \"1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 05:16:37.089754 containerd[1996]: time="2025-07-15T05:16:37.087057743Z" level=info msg="Container b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:37.100373 containerd[1996]: time="2025-07-15T05:16:37.100283193Z" level=info msg="CreateContainer within sandbox 
\"1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864\"" Jul 15 05:16:37.106161 containerd[1996]: time="2025-07-15T05:16:37.106054184Z" level=info msg="StartContainer for \"b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864\"" Jul 15 05:16:37.108550 containerd[1996]: time="2025-07-15T05:16:37.108490201Z" level=info msg="connecting to shim b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864" address="unix:///run/containerd/s/5923df5522682357a8afcf4f2585eae96d8250bf4fb99b3781e58891c0dcb5c4" protocol=ttrpc version=3 Jul 15 05:16:37.138161 systemd[1]: Started cri-containerd-b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864.scope - libcontainer container b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864. Jul 15 05:16:37.201184 containerd[1996]: time="2025-07-15T05:16:37.201089257Z" level=info msg="StartContainer for \"b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864\" returns successfully" Jul 15 05:16:37.568415 kubelet[3322]: E0715 05:16:37.568024 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plmkb" podUID="792c1079-d6ad-4977-9449-eb7585301bdc" Jul 15 05:16:37.836157 kubelet[3322]: I0715 05:16:37.836001 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57d5dd694b-9qtxf" podStartSLOduration=8.390805727 podStartE2EDuration="11.835962557s" podCreationTimestamp="2025-07-15 05:16:26 +0000 UTC" firstStartedPulling="2025-07-15 05:16:29.110427583 +0000 UTC m=+22.702762524" lastFinishedPulling="2025-07-15 05:16:32.555584401 +0000 UTC m=+26.147919354" observedRunningTime="2025-07-15 05:16:33.777222086 +0000 UTC m=+27.369557044" watchObservedRunningTime="2025-07-15 05:16:37.835962557 +0000 UTC m=+31.428297518" Jul 15 05:16:38.034775 systemd[1]: cri-containerd-b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864.scope: Deactivated successfully. Jul 15 05:16:38.035203 systemd[1]: cri-containerd-b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864.scope: Consumed 560ms CPU time, 167.7M memory peak, 12.9M read from disk, 171.2M written to disk. Jul 15 05:16:38.127334 kubelet[3322]: I0715 05:16:38.127096 3322 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 05:16:38.184332 containerd[1996]: time="2025-07-15T05:16:38.184063131Z" level=info msg="received exit event container_id:\"b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864\" id:\"b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864\" pid:4177 exited_at:{seconds:1752556598 nanos:173179368}" Jul 15 05:16:38.186915 containerd[1996]: time="2025-07-15T05:16:38.184138034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864\" id:\"b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864\" pid:4177 exited_at:{seconds:1752556598 nanos:173179368}" Jul 15 05:16:38.221237 systemd[1]: Created slice kubepods-burstable-pod09c0c105_5305_45ec_9f9e_1db93f47968c.slice - libcontainer container kubepods-burstable-pod09c0c105_5305_45ec_9f9e_1db93f47968c.slice. 
Jul 15 05:16:38.242705 systemd[1]: Created slice kubepods-besteffort-pod00cde189_05db_4c0c_92a4_d78eaf0ed38b.slice - libcontainer container kubepods-besteffort-pod00cde189_05db_4c0c_92a4_d78eaf0ed38b.slice. Jul 15 05:16:38.276698 systemd[1]: Created slice kubepods-besteffort-podb8571cd4_fa32_4633_874d_3745d4f318bb.slice - libcontainer container kubepods-besteffort-podb8571cd4_fa32_4633_874d_3745d4f318bb.slice. Jul 15 05:16:38.298088 systemd[1]: Created slice kubepods-besteffort-pod62dd5f46_edc9_4fbc_a34e_dcbf00a60624.slice - libcontainer container kubepods-besteffort-pod62dd5f46_edc9_4fbc_a34e_dcbf00a60624.slice. Jul 15 05:16:38.322549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b07304465f9e802982bf56620877b1f476195cb773579eda1386fbf405780864-rootfs.mount: Deactivated successfully. Jul 15 05:16:38.323094 kubelet[3322]: I0715 05:16:38.323020 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87b9403b-286e-449f-b792-8973989d361e-calico-apiserver-certs\") pod \"calico-apiserver-5d8549b7d9-d5z85\" (UID: \"87b9403b-286e-449f-b792-8973989d361e\") " pod="calico-apiserver/calico-apiserver-5d8549b7d9-d5z85" Jul 15 05:16:38.323332 kubelet[3322]: I0715 05:16:38.323231 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09c0c105-5305-45ec-9f9e-1db93f47968c-config-volume\") pod \"coredns-668d6bf9bc-nwgx4\" (UID: \"09c0c105-5305-45ec-9f9e-1db93f47968c\") " pod="kube-system/coredns-668d6bf9bc-nwgx4" Jul 15 05:16:38.323472 kubelet[3322]: I0715 05:16:38.323431 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495rt\" (UniqueName: \"kubernetes.io/projected/00cde189-05db-4c0c-92a4-d78eaf0ed38b-kube-api-access-495rt\") pod \"calico-kube-controllers-665c85449c-swjcb\" (UID: \"00cde189-05db-4c0c-92a4-d78eaf0ed38b\") " pod="calico-system/calico-kube-controllers-665c85449c-swjcb" Jul 15 05:16:38.323742 kubelet[3322]: I0715 05:16:38.323647 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/62dd5f46-edc9-4fbc-a34e-dcbf00a60624-goldmane-key-pair\") pod \"goldmane-768f4c5c69-bd8wp\" (UID: \"62dd5f46-edc9-4fbc-a34e-dcbf00a60624\") " pod="calico-system/goldmane-768f4c5c69-bd8wp" Jul 15 05:16:38.324469 kubelet[3322]: I0715 05:16:38.323876 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq7dz\" (UniqueName: \"kubernetes.io/projected/87b9403b-286e-449f-b792-8973989d361e-kube-api-access-gq7dz\") pod \"calico-apiserver-5d8549b7d9-d5z85\" (UID: \"87b9403b-286e-449f-b792-8973989d361e\") " pod="calico-apiserver/calico-apiserver-5d8549b7d9-d5z85" Jul 15 05:16:38.324469 kubelet[3322]: I0715 05:16:38.323960 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btxb\" (UniqueName: \"kubernetes.io/projected/ddb65bd6-da34-49ee-a1d2-f42709d6e6d2-kube-api-access-8btxb\") pod \"coredns-668d6bf9bc-rzzg2\" (UID: \"ddb65bd6-da34-49ee-a1d2-f42709d6e6d2\") " pod="kube-system/coredns-668d6bf9bc-rzzg2" Jul 15 05:16:38.324880 kubelet[3322]: I0715 05:16:38.324799 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/62dd5f46-edc9-4fbc-a34e-dcbf00a60624-config\") pod \"goldmane-768f4c5c69-bd8wp\" (UID: \"62dd5f46-edc9-4fbc-a34e-dcbf00a60624\") " pod="calico-system/goldmane-768f4c5c69-bd8wp" Jul 15 05:16:38.325590 kubelet[3322]: I0715 05:16:38.324945 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00cde189-05db-4c0c-92a4-d78eaf0ed38b-tigera-ca-bundle\") pod \"calico-kube-controllers-665c85449c-swjcb\" (UID: \"00cde189-05db-4c0c-92a4-d78eaf0ed38b\") " pod="calico-system/calico-kube-controllers-665c85449c-swjcb" Jul 15 05:16:38.325919 kubelet[3322]: I0715 05:16:38.325829 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl6qb\" (UniqueName: \"kubernetes.io/projected/62dd5f46-edc9-4fbc-a34e-dcbf00a60624-kube-api-access-rl6qb\") pod \"goldmane-768f4c5c69-bd8wp\" (UID: \"62dd5f46-edc9-4fbc-a34e-dcbf00a60624\") " pod="calico-system/goldmane-768f4c5c69-bd8wp" Jul 15 05:16:38.325919 kubelet[3322]: I0715 05:16:38.325867 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8571cd4-fa32-4633-874d-3745d4f318bb-whisker-ca-bundle\") pod \"whisker-6988bd67f9-llbxq\" (UID: \"b8571cd4-fa32-4633-874d-3745d4f318bb\") " pod="calico-system/whisker-6988bd67f9-llbxq" Jul 15 05:16:38.326231 systemd[1]: Created slice kubepods-burstable-podddb65bd6_da34_49ee_a1d2_f42709d6e6d2.slice - libcontainer container kubepods-burstable-podddb65bd6_da34_49ee_a1d2_f42709d6e6d2.slice. Jul 15 05:16:38.326456 kubelet[3322]: I0715 05:16:38.325894 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b8571cd4-fa32-4633-874d-3745d4f318bb-whisker-backend-key-pair\") pod \"whisker-6988bd67f9-llbxq\" (UID: \"b8571cd4-fa32-4633-874d-3745d4f318bb\") " pod="calico-system/whisker-6988bd67f9-llbxq" Jul 15 05:16:38.327137 kubelet[3322]: I0715 05:16:38.326760 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27pmv\" (UniqueName: \"kubernetes.io/projected/09c0c105-5305-45ec-9f9e-1db93f47968c-kube-api-access-27pmv\") pod \"coredns-668d6bf9bc-nwgx4\" (UID: \"09c0c105-5305-45ec-9f9e-1db93f47968c\") " pod="kube-system/coredns-668d6bf9bc-nwgx4" Jul 15 05:16:38.327609 kubelet[3322]: I0715 05:16:38.327367 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddb65bd6-da34-49ee-a1d2-f42709d6e6d2-config-volume\") pod \"coredns-668d6bf9bc-rzzg2\" (UID: \"ddb65bd6-da34-49ee-a1d2-f42709d6e6d2\") " pod="kube-system/coredns-668d6bf9bc-rzzg2" Jul 15 05:16:38.327609 kubelet[3322]: I0715 05:16:38.327436 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/34fd6f0e-186d-449e-b768-2f198ebe186d-calico-apiserver-certs\") pod \"calico-apiserver-5d8549b7d9-2jwmz\" (UID: \"34fd6f0e-186d-449e-b768-2f198ebe186d\") " pod="calico-apiserver/calico-apiserver-5d8549b7d9-2jwmz" Jul 15 05:16:38.327609 kubelet[3322]: I0715 05:16:38.327477 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m229d\" (UniqueName: 
\"kubernetes.io/projected/34fd6f0e-186d-449e-b768-2f198ebe186d-kube-api-access-m229d\") pod \"calico-apiserver-5d8549b7d9-2jwmz\" (UID: \"34fd6f0e-186d-449e-b768-2f198ebe186d\") " pod="calico-apiserver/calico-apiserver-5d8549b7d9-2jwmz" Jul 15 05:16:38.327609 kubelet[3322]: I0715 05:16:38.327540 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62dd5f46-edc9-4fbc-a34e-dcbf00a60624-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-bd8wp\" (UID: \"62dd5f46-edc9-4fbc-a34e-dcbf00a60624\") " pod="calico-system/goldmane-768f4c5c69-bd8wp" Jul 15 05:16:38.327609 kubelet[3322]: I0715 05:16:38.327600 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6kpg\" (UniqueName: \"kubernetes.io/projected/b8571cd4-fa32-4633-874d-3745d4f318bb-kube-api-access-r6kpg\") pod \"whisker-6988bd67f9-llbxq\" (UID: \"b8571cd4-fa32-4633-874d-3745d4f318bb\") " pod="calico-system/whisker-6988bd67f9-llbxq" Jul 15 05:16:38.341616 systemd[1]: Created slice kubepods-besteffort-pod34fd6f0e_186d_449e_b768_2f198ebe186d.slice - libcontainer container kubepods-besteffort-pod34fd6f0e_186d_449e_b768_2f198ebe186d.slice. Jul 15 05:16:38.354664 systemd[1]: Created slice kubepods-besteffort-pod87b9403b_286e_449f_b792_8973989d361e.slice - libcontainer container kubepods-besteffort-pod87b9403b_286e_449f_b792_8973989d361e.slice. Jul 15 05:16:38.536369 containerd[1996]: time="2025-07-15T05:16:38.536072286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwgx4,Uid:09c0c105-5305-45ec-9f9e-1db93f47968c,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:38.553345 containerd[1996]: time="2025-07-15T05:16:38.553296842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-665c85449c-swjcb,Uid:00cde189-05db-4c0c-92a4-d78eaf0ed38b,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:38.597215 containerd[1996]: time="2025-07-15T05:16:38.597134908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6988bd67f9-llbxq,Uid:b8571cd4-fa32-4633-874d-3745d4f318bb,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:38.624570 containerd[1996]: time="2025-07-15T05:16:38.624516491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bd8wp,Uid:62dd5f46-edc9-4fbc-a34e-dcbf00a60624,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:38.648690 containerd[1996]: time="2025-07-15T05:16:38.648637483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzzg2,Uid:ddb65bd6-da34-49ee-a1d2-f42709d6e6d2,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:38.650957 containerd[1996]: time="2025-07-15T05:16:38.650526605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8549b7d9-2jwmz,Uid:34fd6f0e-186d-449e-b768-2f198ebe186d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:38.670399 containerd[1996]: time="2025-07-15T05:16:38.670179753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8549b7d9-d5z85,Uid:87b9403b-286e-449f-b792-8973989d361e,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:38.813605 containerd[1996]: time="2025-07-15T05:16:38.813271891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 05:16:38.980014 containerd[1996]: time="2025-07-15T05:16:38.979174321Z" level=error msg="Failed to destroy network for sandbox \"7664aa8b038b9186871ccfe95c75d76be9ab9a275b60ad148de66b54ac1f019d\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.027993 containerd[1996]: time="2025-07-15T05:16:38.983096044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzzg2,Uid:ddb65bd6-da34-49ee-a1d2-f42709d6e6d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7664aa8b038b9186871ccfe95c75d76be9ab9a275b60ad148de66b54ac1f019d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.033005 containerd[1996]: time="2025-07-15T05:16:39.013288609Z" level=error msg="Failed to destroy network for sandbox \"053b6a885d9ac2ba38daf92335d51f46c1aa7fc9cd677199c5d21d942f7a523a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.033005 containerd[1996]: time="2025-07-15T05:16:39.031238257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8549b7d9-d5z85,Uid:87b9403b-286e-449f-b792-8973989d361e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"053b6a885d9ac2ba38daf92335d51f46c1aa7fc9cd677199c5d21d942f7a523a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.061746 containerd[1996]: time="2025-07-15T05:16:39.061557580Z" level=error msg="Failed to destroy network for sandbox \"3c8d42e1b1185c9bd619624fc63eeefc9664de590e69ef1a13217afc850209f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.064462 containerd[1996]: time="2025-07-15T05:16:39.063918385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-665c85449c-swjcb,Uid:00cde189-05db-4c0c-92a4-d78eaf0ed38b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8d42e1b1185c9bd619624fc63eeefc9664de590e69ef1a13217afc850209f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.065611 kubelet[3322]: E0715 05:16:39.065083 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7664aa8b038b9186871ccfe95c75d76be9ab9a275b60ad148de66b54ac1f019d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.065611 kubelet[3322]: E0715 05:16:39.065188 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7664aa8b038b9186871ccfe95c75d76be9ab9a275b60ad148de66b54ac1f019d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rzzg2" Jul 15 05:16:39.065611 kubelet[3322]: E0715 05:16:39.065217 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7664aa8b038b9186871ccfe95c75d76be9ab9a275b60ad148de66b54ac1f019d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rzzg2" Jul 15 05:16:39.067204 kubelet[3322]: E0715 05:16:39.065273 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rzzg2_kube-system(ddb65bd6-da34-49ee-a1d2-f42709d6e6d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rzzg2_kube-system(ddb65bd6-da34-49ee-a1d2-f42709d6e6d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7664aa8b038b9186871ccfe95c75d76be9ab9a275b60ad148de66b54ac1f019d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rzzg2" podUID="ddb65bd6-da34-49ee-a1d2-f42709d6e6d2" Jul 15 05:16:39.067204 kubelet[3322]: E0715 05:16:39.066518 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"053b6a885d9ac2ba38daf92335d51f46c1aa7fc9cd677199c5d21d942f7a523a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.067204 kubelet[3322]: E0715 05:16:39.066584 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"053b6a885d9ac2ba38daf92335d51f46c1aa7fc9cd677199c5d21d942f7a523a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8549b7d9-d5z85" Jul 15 05:16:39.067380 kubelet[3322]: E0715 05:16:39.066613 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"053b6a885d9ac2ba38daf92335d51f46c1aa7fc9cd677199c5d21d942f7a523a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8549b7d9-d5z85" Jul 15 05:16:39.067380 kubelet[3322]: E0715 05:16:39.066749 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d8549b7d9-d5z85_calico-apiserver(87b9403b-286e-449f-b792-8973989d361e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d8549b7d9-d5z85_calico-apiserver(87b9403b-286e-449f-b792-8973989d361e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"053b6a885d9ac2ba38daf92335d51f46c1aa7fc9cd677199c5d21d942f7a523a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d8549b7d9-d5z85" podUID="87b9403b-286e-449f-b792-8973989d361e" Jul 15 05:16:39.068556 kubelet[3322]: E0715 05:16:39.068302 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8d42e1b1185c9bd619624fc63eeefc9664de590e69ef1a13217afc850209f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.068556 kubelet[3322]: E0715 05:16:39.068352 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8d42e1b1185c9bd619624fc63eeefc9664de590e69ef1a13217afc850209f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-665c85449c-swjcb" Jul 15 05:16:39.068556 kubelet[3322]: E0715 05:16:39.068380 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8d42e1b1185c9bd619624fc63eeefc9664de590e69ef1a13217afc850209f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-665c85449c-swjcb" Jul 15 05:16:39.068702 kubelet[3322]: E0715 05:16:39.068598 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-665c85449c-swjcb_calico-system(00cde189-05db-4c0c-92a4-d78eaf0ed38b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-665c85449c-swjcb_calico-system(00cde189-05db-4c0c-92a4-d78eaf0ed38b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c8d42e1b1185c9bd619624fc63eeefc9664de590e69ef1a13217afc850209f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-665c85449c-swjcb" podUID="00cde189-05db-4c0c-92a4-d78eaf0ed38b" Jul 15 05:16:39.070887 containerd[1996]: time="2025-07-15T05:16:39.070810549Z" level=error msg="Failed to destroy network for sandbox \"1f24d8830240fb5ca0c6774523b94ea5b8b94a0565f81b738221b766e0b1ae51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.072836 containerd[1996]: time="2025-07-15T05:16:39.072783279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6988bd67f9-llbxq,Uid:b8571cd4-fa32-4633-874d-3745d4f318bb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f24d8830240fb5ca0c6774523b94ea5b8b94a0565f81b738221b766e0b1ae51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.073042 kubelet[3322]: E0715 05:16:39.073010 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1f24d8830240fb5ca0c6774523b94ea5b8b94a0565f81b738221b766e0b1ae51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.073121 kubelet[3322]: E0715 05:16:39.073067 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f24d8830240fb5ca0c6774523b94ea5b8b94a0565f81b738221b766e0b1ae51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6988bd67f9-llbxq" Jul 15 05:16:39.073121 kubelet[3322]: E0715 05:16:39.073096 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f24d8830240fb5ca0c6774523b94ea5b8b94a0565f81b738221b766e0b1ae51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6988bd67f9-llbxq" Jul 15 05:16:39.074321 kubelet[3322]: E0715 05:16:39.074171 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6988bd67f9-llbxq_calico-system(b8571cd4-fa32-4633-874d-3745d4f318bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6988bd67f9-llbxq_calico-system(b8571cd4-fa32-4633-874d-3745d4f318bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f24d8830240fb5ca0c6774523b94ea5b8b94a0565f81b738221b766e0b1ae51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6988bd67f9-llbxq" podUID="b8571cd4-fa32-4633-874d-3745d4f318bb" Jul 15 05:16:39.074830 containerd[1996]: time="2025-07-15T05:16:39.074452732Z" level=error msg="Failed to destroy network for sandbox \"acfd0a2ae2ceee64ba7034ad1c885cd51ca9c3bf0c22abf067c2474e37285a8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.076353 containerd[1996]: time="2025-07-15T05:16:39.076301560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwgx4,Uid:09c0c105-5305-45ec-9f9e-1db93f47968c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfd0a2ae2ceee64ba7034ad1c885cd51ca9c3bf0c22abf067c2474e37285a8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.076602 kubelet[3322]: E0715 05:16:39.076570 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfd0a2ae2ceee64ba7034ad1c885cd51ca9c3bf0c22abf067c2474e37285a8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.076681 kubelet[3322]: E0715 
05:16:39.076626 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfd0a2ae2ceee64ba7034ad1c885cd51ca9c3bf0c22abf067c2474e37285a8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nwgx4" Jul 15 05:16:39.076681 kubelet[3322]: E0715 05:16:39.076655 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfd0a2ae2ceee64ba7034ad1c885cd51ca9c3bf0c22abf067c2474e37285a8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nwgx4" Jul 15 05:16:39.076777 kubelet[3322]: E0715 05:16:39.076705 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nwgx4_kube-system(09c0c105-5305-45ec-9f9e-1db93f47968c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nwgx4_kube-system(09c0c105-5305-45ec-9f9e-1db93f47968c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acfd0a2ae2ceee64ba7034ad1c885cd51ca9c3bf0c22abf067c2474e37285a8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nwgx4" podUID="09c0c105-5305-45ec-9f9e-1db93f47968c" Jul 15 05:16:39.077826 containerd[1996]: time="2025-07-15T05:16:39.077773417Z" level=error msg="Failed to destroy network for sandbox \"0d8c5a85a8eb99768c0e896e64e0195dbcb625a51596aa32fc0e2078203911ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.079548 containerd[1996]: time="2025-07-15T05:16:39.079512886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bd8wp,Uid:62dd5f46-edc9-4fbc-a34e-dcbf00a60624,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8c5a85a8eb99768c0e896e64e0195dbcb625a51596aa32fc0e2078203911ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.082017 kubelet[3322]: E0715 05:16:39.081965 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8c5a85a8eb99768c0e896e64e0195dbcb625a51596aa32fc0e2078203911ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.082121 kubelet[3322]: E0715 05:16:39.082041 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8c5a85a8eb99768c0e896e64e0195dbcb625a51596aa32fc0e2078203911ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-bd8wp" Jul 15 05:16:39.082121 kubelet[3322]: E0715 05:16:39.082068 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8c5a85a8eb99768c0e896e64e0195dbcb625a51596aa32fc0e2078203911ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-bd8wp" Jul 15 05:16:39.082235 kubelet[3322]: E0715 05:16:39.082136 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-bd8wp_calico-system(62dd5f46-edc9-4fbc-a34e-dcbf00a60624)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-bd8wp_calico-system(62dd5f46-edc9-4fbc-a34e-dcbf00a60624)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d8c5a85a8eb99768c0e896e64e0195dbcb625a51596aa32fc0e2078203911ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-bd8wp" podUID="62dd5f46-edc9-4fbc-a34e-dcbf00a60624" Jul 15 05:16:39.083427 containerd[1996]: time="2025-07-15T05:16:39.083323016Z" level=error msg="Failed to destroy network for sandbox \"971002b238ae90dcec7a530d6d01e9fa7ee880aa45e2eaabe5af5f2039fd0c47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.084455 containerd[1996]: time="2025-07-15T05:16:39.084423481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8549b7d9-2jwmz,Uid:34fd6f0e-186d-449e-b768-2f198ebe186d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"971002b238ae90dcec7a530d6d01e9fa7ee880aa45e2eaabe5af5f2039fd0c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.084789 kubelet[3322]: E0715 05:16:39.084756 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"971002b238ae90dcec7a530d6d01e9fa7ee880aa45e2eaabe5af5f2039fd0c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.084867 kubelet[3322]: E0715 05:16:39.084806 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"971002b238ae90dcec7a530d6d01e9fa7ee880aa45e2eaabe5af5f2039fd0c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8549b7d9-2jwmz" Jul 15 05:16:39.084867 kubelet[3322]: E0715 05:16:39.084830 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"971002b238ae90dcec7a530d6d01e9fa7ee880aa45e2eaabe5af5f2039fd0c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8549b7d9-2jwmz" Jul 15 05:16:39.085041 kubelet[3322]: E0715 05:16:39.084889 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d8549b7d9-2jwmz_calico-apiserver(34fd6f0e-186d-449e-b768-2f198ebe186d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d8549b7d9-2jwmz_calico-apiserver(34fd6f0e-186d-449e-b768-2f198ebe186d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"971002b238ae90dcec7a530d6d01e9fa7ee880aa45e2eaabe5af5f2039fd0c47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d8549b7d9-2jwmz" podUID="34fd6f0e-186d-449e-b768-2f198ebe186d" Jul 15 05:16:39.574631 systemd[1]: Created slice kubepods-besteffort-pod792c1079_d6ad_4977_9449_eb7585301bdc.slice - libcontainer container kubepods-besteffort-pod792c1079_d6ad_4977_9449_eb7585301bdc.slice. Jul 15 05:16:39.582953 containerd[1996]: time="2025-07-15T05:16:39.582896181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plmkb,Uid:792c1079-d6ad-4977-9449-eb7585301bdc,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:39.668880 containerd[1996]: time="2025-07-15T05:16:39.668825576Z" level=error msg="Failed to destroy network for sandbox \"53435bc276aa38350423d1453cf31071aef629a8deff806dd5bc1130af0bc2e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.673887 systemd[1]: run-netns-cni\x2d5c3254a0\x2d1f1a\x2d7cd9\x2dae84\x2d65e46646644f.mount: Deactivated successfully. 
Jul 15 05:16:39.675716 containerd[1996]: time="2025-07-15T05:16:39.674525797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plmkb,Uid:792c1079-d6ad-4977-9449-eb7585301bdc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53435bc276aa38350423d1453cf31071aef629a8deff806dd5bc1130af0bc2e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.675889 kubelet[3322]: E0715 05:16:39.674778 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53435bc276aa38350423d1453cf31071aef629a8deff806dd5bc1130af0bc2e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:39.675889 kubelet[3322]: E0715 05:16:39.675432 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53435bc276aa38350423d1453cf31071aef629a8deff806dd5bc1130af0bc2e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plmkb" Jul 15 05:16:39.675889 kubelet[3322]: E0715 05:16:39.675492 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53435bc276aa38350423d1453cf31071aef629a8deff806dd5bc1130af0bc2e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plmkb" Jul 15 05:16:39.676529 kubelet[3322]: E0715 05:16:39.675594 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-plmkb_calico-system(792c1079-d6ad-4977-9449-eb7585301bdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-plmkb_calico-system(792c1079-d6ad-4977-9449-eb7585301bdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53435bc276aa38350423d1453cf31071aef629a8deff806dd5bc1130af0bc2e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-plmkb" podUID="792c1079-d6ad-4977-9449-eb7585301bdc" Jul 15 05:16:42.970079 kubelet[3322]: I0715 05:16:42.970008 3322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:45.385251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725095097.mount: Deactivated successfully. 
Jul 15 05:16:45.482730 containerd[1996]: time="2025-07-15T05:16:45.482487773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 05:16:45.485810 containerd[1996]: time="2025-07-15T05:16:45.456833012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:45.493546 containerd[1996]: time="2025-07-15T05:16:45.493497061Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:45.495753 containerd[1996]: time="2025-07-15T05:16:45.495719682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:45.497466 containerd[1996]: time="2025-07-15T05:16:45.497426772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.684110212s" Jul 15 05:16:45.497466 containerd[1996]: time="2025-07-15T05:16:45.497466622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 05:16:45.521273 containerd[1996]: time="2025-07-15T05:16:45.521209264Z" level=info msg="CreateContainer within sandbox \"1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 05:16:45.568753 containerd[1996]: time="2025-07-15T05:16:45.568704315Z" level=info msg="Container e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:45.571104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800162764.mount: Deactivated successfully. Jul 15 05:16:45.608097 containerd[1996]: time="2025-07-15T05:16:45.608050344Z" level=info msg="CreateContainer within sandbox \"1533fb7c189b7a94c1e8b11da650f25e59673701c4e1d78129505ee620acaf1b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518\"" Jul 15 05:16:45.610204 containerd[1996]: time="2025-07-15T05:16:45.608674831Z" level=info msg="StartContainer for \"e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518\"" Jul 15 05:16:45.612711 containerd[1996]: time="2025-07-15T05:16:45.612644654Z" level=info msg="connecting to shim e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518" address="unix:///run/containerd/s/5923df5522682357a8afcf4f2585eae96d8250bf4fb99b3781e58891c0dcb5c4" protocol=ttrpc version=3 Jul 15 05:16:45.747156 systemd[1]: Started cri-containerd-e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518.scope - libcontainer container e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518. Jul 15 05:16:45.804219 containerd[1996]: time="2025-07-15T05:16:45.804174810Z" level=info msg="StartContainer for \"e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518\" returns successfully" Jul 15 05:16:46.128097 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jul 15 05:16:46.129698 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 05:16:46.402313 kubelet[3322]: I0715 05:16:46.402202 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h87qb" podStartSLOduration=2.624948774 podStartE2EDuration="19.402183853s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:28.720932471 +0000 UTC m=+22.313267425" lastFinishedPulling="2025-07-15 05:16:45.498167563 +0000 UTC m=+39.090502504" observedRunningTime="2025-07-15 05:16:45.864450866 +0000 UTC m=+39.456785826" watchObservedRunningTime="2025-07-15 05:16:46.402183853 +0000 UTC m=+39.994518825" Jul 15 05:16:46.499022 kubelet[3322]: I0715 05:16:46.498975 3322 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8571cd4-fa32-4633-874d-3745d4f318bb-whisker-ca-bundle\") pod \"b8571cd4-fa32-4633-874d-3745d4f318bb\" (UID: \"b8571cd4-fa32-4633-874d-3745d4f318bb\") " Jul 15 05:16:46.499590 kubelet[3322]: I0715 05:16:46.499533 3322 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b8571cd4-fa32-4633-874d-3745d4f318bb-whisker-backend-key-pair\") pod \"b8571cd4-fa32-4633-874d-3745d4f318bb\" (UID: \"b8571cd4-fa32-4633-874d-3745d4f318bb\") " Jul 15 05:16:46.500039 kubelet[3322]: I0715 05:16:46.499881 3322 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6kpg\" (UniqueName: \"kubernetes.io/projected/b8571cd4-fa32-4633-874d-3745d4f318bb-kube-api-access-r6kpg\") pod \"b8571cd4-fa32-4633-874d-3745d4f318bb\" (UID: \"b8571cd4-fa32-4633-874d-3745d4f318bb\") " Jul 15 05:16:46.503374 kubelet[3322]: I0715 05:16:46.499468 3322 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8571cd4-fa32-4633-874d-3745d4f318bb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b8571cd4-fa32-4633-874d-3745d4f318bb" (UID: "b8571cd4-fa32-4633-874d-3745d4f318bb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 05:16:46.528462 kubelet[3322]: I0715 05:16:46.528392 3322 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8571cd4-fa32-4633-874d-3745d4f318bb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b8571cd4-fa32-4633-874d-3745d4f318bb" (UID: "b8571cd4-fa32-4633-874d-3745d4f318bb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 05:16:46.528751 kubelet[3322]: I0715 05:16:46.528714 3322 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8571cd4-fa32-4633-874d-3745d4f318bb-kube-api-access-r6kpg" (OuterVolumeSpecName: "kube-api-access-r6kpg") pod "b8571cd4-fa32-4633-874d-3745d4f318bb" (UID: "b8571cd4-fa32-4633-874d-3745d4f318bb"). InnerVolumeSpecName "kube-api-access-r6kpg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 05:16:46.529173 systemd[1]: var-lib-kubelet-pods-b8571cd4\x2dfa32\x2d4633\x2d874d\x2d3745d4f318bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr6kpg.mount: Deactivated successfully. 
Jul 15 05:16:46.529580 systemd[1]: var-lib-kubelet-pods-b8571cd4\x2dfa32\x2d4633\x2d874d\x2d3745d4f318bb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 05:16:46.585121 systemd[1]: Removed slice kubepods-besteffort-podb8571cd4_fa32_4633_874d_3745d4f318bb.slice - libcontainer container kubepods-besteffort-podb8571cd4_fa32_4633_874d_3745d4f318bb.slice. Jul 15 05:16:46.600932 kubelet[3322]: I0715 05:16:46.600868 3322 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8571cd4-fa32-4633-874d-3745d4f318bb-whisker-ca-bundle\") on node \"ip-172-31-18-224\" DevicePath \"\"" Jul 15 05:16:46.600932 kubelet[3322]: I0715 05:16:46.600935 3322 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b8571cd4-fa32-4633-874d-3745d4f318bb-whisker-backend-key-pair\") on node \"ip-172-31-18-224\" DevicePath \"\"" Jul 15 05:16:46.601116 kubelet[3322]: I0715 05:16:46.600947 3322 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6kpg\" (UniqueName: \"kubernetes.io/projected/b8571cd4-fa32-4633-874d-3745d4f318bb-kube-api-access-r6kpg\") on node \"ip-172-31-18-224\" DevicePath \"\"" Jul 15 05:16:46.849006 kubelet[3322]: I0715 05:16:46.848826 3322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:46.991488 systemd[1]: Created slice kubepods-besteffort-pod75e5c8ee_495c_4937_8291_82b8ea7c9cfb.slice - libcontainer container kubepods-besteffort-pod75e5c8ee_495c_4937_8291_82b8ea7c9cfb.slice. Jul 15 05:16:47.105653 kubelet[3322]: I0715 05:16:47.105424 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/75e5c8ee-495c-4937-8291-82b8ea7c9cfb-whisker-backend-key-pair\") pod \"whisker-5776c95fbf-qzrx2\" (UID: \"75e5c8ee-495c-4937-8291-82b8ea7c9cfb\") " pod="calico-system/whisker-5776c95fbf-qzrx2" Jul 15 05:16:47.105653 kubelet[3322]: I0715 05:16:47.105537 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9sp4\" (UniqueName: \"kubernetes.io/projected/75e5c8ee-495c-4937-8291-82b8ea7c9cfb-kube-api-access-p9sp4\") pod \"whisker-5776c95fbf-qzrx2\" (UID: \"75e5c8ee-495c-4937-8291-82b8ea7c9cfb\") " pod="calico-system/whisker-5776c95fbf-qzrx2" Jul 15 05:16:47.105653 kubelet[3322]: I0715 05:16:47.105588 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75e5c8ee-495c-4937-8291-82b8ea7c9cfb-whisker-ca-bundle\") pod \"whisker-5776c95fbf-qzrx2\" (UID: \"75e5c8ee-495c-4937-8291-82b8ea7c9cfb\") " pod="calico-system/whisker-5776c95fbf-qzrx2" Jul 15 05:16:47.297034 containerd[1996]: time="2025-07-15T05:16:47.296990704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5776c95fbf-qzrx2,Uid:75e5c8ee-495c-4937-8291-82b8ea7c9cfb,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:47.874351 (udev-worker)[4480]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 05:16:47.880677 systemd-networkd[1854]: calie2e6e0f2945: Link UP Jul 15 05:16:47.882225 systemd-networkd[1854]: calie2e6e0f2945: Gained carrier Jul 15 05:16:47.935640 containerd[1996]: 2025-07-15 05:16:47.333 [INFO][4506] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:47.935640 containerd[1996]: 2025-07-15 05:16:47.384 [INFO][4506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0 whisker-5776c95fbf- calico-system 75e5c8ee-495c-4937-8291-82b8ea7c9cfb 905 0 2025-07-15 05:16:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5776c95fbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-224 whisker-5776c95fbf-qzrx2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie2e6e0f2945 [] [] }} ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Namespace="calico-system" Pod="whisker-5776c95fbf-qzrx2" WorkloadEndpoint="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-" Jul 15 05:16:47.935640 containerd[1996]: 2025-07-15 05:16:47.384 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Namespace="calico-system" Pod="whisker-5776c95fbf-qzrx2" WorkloadEndpoint="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" Jul 15 05:16:47.935640 containerd[1996]: 2025-07-15 05:16:47.738 [INFO][4520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" HandleID="k8s-pod-network.635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Workload="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" Jul 15 05:16:47.936965 containerd[1996]: 2025-07-15 05:16:47.743 [INFO][4520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" HandleID="k8s-pod-network.635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Workload="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf380), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-224", "pod":"whisker-5776c95fbf-qzrx2", "timestamp":"2025-07-15 05:16:47.738545967 +0000 UTC"}, Hostname:"ip-172-31-18-224", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:47.936965 containerd[1996]: 2025-07-15 05:16:47.743 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:47.936965 containerd[1996]: 2025-07-15 05:16:47.743 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:47.936965 containerd[1996]: 2025-07-15 05:16:47.744 [INFO][4520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-224' Jul 15 05:16:47.936965 containerd[1996]: 2025-07-15 05:16:47.762 [INFO][4520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" host="ip-172-31-18-224" Jul 15 05:16:47.936965 containerd[1996]: 2025-07-15 05:16:47.782 [INFO][4520] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-224" Jul 15 05:16:47.936965 containerd[1996]: 2025-07-15 05:16:47.795 [INFO][4520] ipam/ipam.go 511: Trying affinity for 192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:47.936965 containerd[1996]: 2025-07-15 05:16:47.799 [INFO][4520] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:47.936965 containerd[1996]: 2025-07-15 05:16:47.806 [INFO][4520] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:47.939297 containerd[1996]: 2025-07-15 05:16:47.806 [INFO][4520] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" host="ip-172-31-18-224" Jul 15 05:16:47.939297 containerd[1996]: 2025-07-15 05:16:47.814 [INFO][4520] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85 Jul 15 05:16:47.939297 containerd[1996]: 2025-07-15 05:16:47.823 [INFO][4520] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" host="ip-172-31-18-224" Jul 15 05:16:47.939297 containerd[1996]: 2025-07-15 05:16:47.833 [INFO][4520] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.113.129/26] block=192.168.113.128/26 handle="k8s-pod-network.635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" host="ip-172-31-18-224" Jul 15 05:16:47.939297 containerd[1996]: 2025-07-15 05:16:47.834 [INFO][4520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.129/26] handle="k8s-pod-network.635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" host="ip-172-31-18-224" Jul 15 05:16:47.939297 containerd[1996]: 2025-07-15 05:16:47.834 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:47.939297 containerd[1996]: 2025-07-15 05:16:47.834 [INFO][4520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.129/26] IPv6=[] ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" HandleID="k8s-pod-network.635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Workload="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" Jul 15 05:16:47.939564 containerd[1996]: 2025-07-15 05:16:47.846 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Namespace="calico-system" Pod="whisker-5776c95fbf-qzrx2" WorkloadEndpoint="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0", GenerateName:"whisker-5776c95fbf-", Namespace:"calico-system", SelfLink:"", UID:"75e5c8ee-495c-4937-8291-82b8ea7c9cfb", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5776c95fbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"", Pod:"whisker-5776c95fbf-qzrx2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2e6e0f2945", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:47.939564 containerd[1996]: 2025-07-15 05:16:47.846 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.129/32] ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Namespace="calico-system" Pod="whisker-5776c95fbf-qzrx2" WorkloadEndpoint="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" Jul 15 05:16:47.939726 containerd[1996]: 2025-07-15 05:16:47.846 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2e6e0f2945 ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Namespace="calico-system" Pod="whisker-5776c95fbf-qzrx2" WorkloadEndpoint="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" Jul 15 05:16:47.939726 containerd[1996]: 2025-07-15 05:16:47.888 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Namespace="calico-system" Pod="whisker-5776c95fbf-qzrx2" WorkloadEndpoint="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" Jul 15 05:16:47.939807 containerd[1996]: 2025-07-15 05:16:47.888 [INFO][4506] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Namespace="calico-system" Pod="whisker-5776c95fbf-qzrx2" 
WorkloadEndpoint="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0", GenerateName:"whisker-5776c95fbf-", Namespace:"calico-system", SelfLink:"", UID:"75e5c8ee-495c-4937-8291-82b8ea7c9cfb", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5776c95fbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85", Pod:"whisker-5776c95fbf-qzrx2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2e6e0f2945", MAC:"26:20:50:c6:30:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:47.942576 containerd[1996]: 2025-07-15 05:16:47.917 [INFO][4506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" Namespace="calico-system" Pod="whisker-5776c95fbf-qzrx2" WorkloadEndpoint="ip--172--31--18--224-k8s-whisker--5776c95fbf--qzrx2-eth0" Jul 15 05:16:48.338018 containerd[1996]: time="2025-07-15T05:16:48.336860305Z" level=info msg="connecting to shim 635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85" address="unix:///run/containerd/s/edab6c7a6c16f4ee8b3acee03ab9abbc574bd55a750e0f2433f01d23582d59fc" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:48.403257 systemd[1]: Started cri-containerd-635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85.scope - libcontainer container 635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85. Jul 15 05:16:48.587029 kubelet[3322]: I0715 05:16:48.586307 3322 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8571cd4-fa32-4633-874d-3745d4f318bb" path="/var/lib/kubelet/pods/b8571cd4-fa32-4633-874d-3745d4f318bb/volumes" Jul 15 05:16:48.590821 containerd[1996]: time="2025-07-15T05:16:48.590424734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5776c95fbf-qzrx2,Uid:75e5c8ee-495c-4937-8291-82b8ea7c9cfb,Namespace:calico-system,Attempt:0,} returns sandbox id \"635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85\"" Jul 15 05:16:48.626065 containerd[1996]: time="2025-07-15T05:16:48.626009475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 05:16:48.760103 systemd-networkd[1854]: vxlan.calico: Link UP Jul 15 05:16:48.760113 systemd-networkd[1854]: vxlan.calico: Gained carrier Jul 15 05:16:48.783560 (udev-worker)[4478]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 05:16:49.395617 systemd-networkd[1854]: calie2e6e0f2945: Gained IPv6LL Jul 15 05:16:49.578591 containerd[1996]: time="2025-07-15T05:16:49.578546690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bd8wp,Uid:62dd5f46-edc9-4fbc-a34e-dcbf00a60624,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:49.702542 systemd-networkd[1854]: calie3812fa4608: Link UP Jul 15 05:16:49.702887 systemd-networkd[1854]: calie3812fa4608: Gained carrier Jul 15 05:16:49.728803 containerd[1996]: 2025-07-15 05:16:49.619 [INFO][4769] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0 goldmane-768f4c5c69- calico-system 62dd5f46-edc9-4fbc-a34e-dcbf00a60624 831 0 2025-07-15 05:16:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-224 goldmane-768f4c5c69-bd8wp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie3812fa4608 [] [] }} ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Namespace="calico-system" Pod="goldmane-768f4c5c69-bd8wp" WorkloadEndpoint="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-" Jul 15 05:16:49.728803 containerd[1996]: 2025-07-15 05:16:49.619 [INFO][4769] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Namespace="calico-system" Pod="goldmane-768f4c5c69-bd8wp" WorkloadEndpoint="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" Jul 15 05:16:49.728803 containerd[1996]: 2025-07-15 05:16:49.648 [INFO][4782] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" HandleID="k8s-pod-network.2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Workload="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" Jul 15 05:16:49.729303 containerd[1996]: 2025-07-15 05:16:49.649 [INFO][4782] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" HandleID="k8s-pod-network.2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Workload="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-224", "pod":"goldmane-768f4c5c69-bd8wp", "timestamp":"2025-07-15 05:16:49.648943303 +0000 UTC"}, Hostname:"ip-172-31-18-224", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:49.729303 containerd[1996]: 2025-07-15 05:16:49.649 [INFO][4782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:49.729303 containerd[1996]: 2025-07-15 05:16:49.649 [INFO][4782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:49.729303 containerd[1996]: 2025-07-15 05:16:49.649 [INFO][4782] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-224' Jul 15 05:16:49.729303 containerd[1996]: 2025-07-15 05:16:49.660 [INFO][4782] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" host="ip-172-31-18-224" Jul 15 05:16:49.729303 containerd[1996]: 2025-07-15 05:16:49.665 [INFO][4782] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-224" Jul 15 05:16:49.729303 containerd[1996]: 2025-07-15 05:16:49.671 [INFO][4782] ipam/ipam.go 511: Trying affinity for 192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:49.729303 containerd[1996]: 2025-07-15 05:16:49.673 [INFO][4782] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:49.729303 containerd[1996]: 2025-07-15 05:16:49.675 [INFO][4782] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:49.729538 containerd[1996]: 2025-07-15 05:16:49.675 [INFO][4782] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" host="ip-172-31-18-224" Jul 15 05:16:49.729538 containerd[1996]: 2025-07-15 05:16:49.679 [INFO][4782] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4 Jul 15 05:16:49.729538 containerd[1996]: 2025-07-15 05:16:49.683 [INFO][4782] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" host="ip-172-31-18-224" Jul 15 05:16:49.729538 containerd[1996]: 2025-07-15 05:16:49.691 [INFO][4782] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.113.130/26] block=192.168.113.128/26 handle="k8s-pod-network.2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" host="ip-172-31-18-224" Jul 15 05:16:49.729538 containerd[1996]: 2025-07-15 05:16:49.691 [INFO][4782] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.130/26] handle="k8s-pod-network.2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" host="ip-172-31-18-224" Jul 15 05:16:49.729538 containerd[1996]: 2025-07-15 05:16:49.691 [INFO][4782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:49.729538 containerd[1996]: 2025-07-15 05:16:49.691 [INFO][4782] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.130/26] IPv6=[] ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" HandleID="k8s-pod-network.2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Workload="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" Jul 15 05:16:49.729836 containerd[1996]: 2025-07-15 05:16:49.694 [INFO][4769] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Namespace="calico-system" Pod="goldmane-768f4c5c69-bd8wp" WorkloadEndpoint="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"62dd5f46-edc9-4fbc-a34e-dcbf00a60624", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"", Pod:"goldmane-768f4c5c69-bd8wp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie3812fa4608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:49.729836 containerd[1996]: 2025-07-15 05:16:49.695 [INFO][4769] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.130/32] ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Namespace="calico-system" Pod="goldmane-768f4c5c69-bd8wp" WorkloadEndpoint="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" Jul 15 05:16:49.730144 containerd[1996]: 2025-07-15 05:16:49.695 [INFO][4769] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3812fa4608 ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Namespace="calico-system" Pod="goldmane-768f4c5c69-bd8wp" WorkloadEndpoint="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" Jul 15 05:16:49.730144 containerd[1996]: 2025-07-15 05:16:49.707 [INFO][4769] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Namespace="calico-system" Pod="goldmane-768f4c5c69-bd8wp" WorkloadEndpoint="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" Jul 15 05:16:49.730257 containerd[1996]: 2025-07-15 05:16:49.709 [INFO][4769] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Namespace="calico-system" Pod="goldmane-768f4c5c69-bd8wp" 
WorkloadEndpoint="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"62dd5f46-edc9-4fbc-a34e-dcbf00a60624", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4", Pod:"goldmane-768f4c5c69-bd8wp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie3812fa4608", MAC:"d6:bb:7d:97:99:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:49.731759 containerd[1996]: 2025-07-15 05:16:49.723 [INFO][4769] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" Namespace="calico-system" Pod="goldmane-768f4c5c69-bd8wp" WorkloadEndpoint="ip--172--31--18--224-k8s-goldmane--768f4c5c69--bd8wp-eth0" Jul 15 05:16:49.772384 containerd[1996]: time="2025-07-15T05:16:49.772332859Z" level=info msg="connecting to shim 2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4" address="unix:///run/containerd/s/39067d4554bf8fb52a1b6f2f5e5b47b772721348ccc668e178595ce1302ab2ea" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:49.808109 systemd[1]: Started cri-containerd-2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4.scope - libcontainer container 2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4. 
Jul 15 05:16:49.843155 systemd-networkd[1854]: vxlan.calico: Gained IPv6LL Jul 15 05:16:49.889030 containerd[1996]: time="2025-07-15T05:16:49.888980830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bd8wp,Uid:62dd5f46-edc9-4fbc-a34e-dcbf00a60624,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4\"" Jul 15 05:16:50.004559 containerd[1996]: time="2025-07-15T05:16:50.004424532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:50.006205 containerd[1996]: time="2025-07-15T05:16:50.006156572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 05:16:50.007698 containerd[1996]: time="2025-07-15T05:16:50.007637748Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:50.011216 containerd[1996]: time="2025-07-15T05:16:50.010403875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:50.011216 containerd[1996]: time="2025-07-15T05:16:50.011079718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.385023195s" Jul 15 05:16:50.011216 containerd[1996]: time="2025-07-15T05:16:50.011115394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 05:16:50.012580 containerd[1996]: time="2025-07-15T05:16:50.012548561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 05:16:50.025309 containerd[1996]: time="2025-07-15T05:16:50.025267642Z" level=info msg="CreateContainer within sandbox \"635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 05:16:50.045396 containerd[1996]: time="2025-07-15T05:16:50.045341971Z" level=info msg="Container 556f606387520562dab50db03216f2aaeebe73bde2b0f930eac71bbf4bd48c12: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:50.051543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757508482.mount: Deactivated successfully. 
Jul 15 05:16:50.061284 containerd[1996]: time="2025-07-15T05:16:50.061240115Z" level=info msg="CreateContainer within sandbox \"635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"556f606387520562dab50db03216f2aaeebe73bde2b0f930eac71bbf4bd48c12\"" Jul 15 05:16:50.062078 containerd[1996]: time="2025-07-15T05:16:50.061976203Z" level=info msg="StartContainer for \"556f606387520562dab50db03216f2aaeebe73bde2b0f930eac71bbf4bd48c12\"" Jul 15 05:16:50.063579 containerd[1996]: time="2025-07-15T05:16:50.063541325Z" level=info msg="connecting to shim 556f606387520562dab50db03216f2aaeebe73bde2b0f930eac71bbf4bd48c12" address="unix:///run/containerd/s/edab6c7a6c16f4ee8b3acee03ab9abbc574bd55a750e0f2433f01d23582d59fc" protocol=ttrpc version=3 Jul 15 05:16:50.086233 systemd[1]: Started cri-containerd-556f606387520562dab50db03216f2aaeebe73bde2b0f930eac71bbf4bd48c12.scope - libcontainer container 556f606387520562dab50db03216f2aaeebe73bde2b0f930eac71bbf4bd48c12. Jul 15 05:16:50.145279 containerd[1996]: time="2025-07-15T05:16:50.145245075Z" level=info msg="StartContainer for \"556f606387520562dab50db03216f2aaeebe73bde2b0f930eac71bbf4bd48c12\" returns successfully" Jul 15 05:16:50.571744 containerd[1996]: time="2025-07-15T05:16:50.571191820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzzg2,Uid:ddb65bd6-da34-49ee-a1d2-f42709d6e6d2,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:50.572317 containerd[1996]: time="2025-07-15T05:16:50.572286186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8549b7d9-d5z85,Uid:87b9403b-286e-449f-b792-8973989d361e,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:50.572885 containerd[1996]: time="2025-07-15T05:16:50.572644550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-665c85449c-swjcb,Uid:00cde189-05db-4c0c-92a4-d78eaf0ed38b,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:50.572885 containerd[1996]: time="2025-07-15T05:16:50.572665089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8549b7d9-2jwmz,Uid:34fd6f0e-186d-449e-b768-2f198ebe186d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:50.572885 containerd[1996]: time="2025-07-15T05:16:50.572788382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwgx4,Uid:09c0c105-5305-45ec-9f9e-1db93f47968c,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:50.572885 containerd[1996]: time="2025-07-15T05:16:50.572803907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plmkb,Uid:792c1079-d6ad-4977-9449-eb7585301bdc,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:50.613541 kubelet[3322]: I0715 05:16:50.613502 3322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:51.124036 systemd-networkd[1854]: calie3812fa4608: Gained IPv6LL Jul 15 05:16:51.226197 systemd-networkd[1854]: califc75f95ac6b: Link UP Jul 15 05:16:51.228036 systemd-networkd[1854]: califc75f95ac6b: Gained carrier Jul 15 05:16:51.247823 containerd[1996]: time="2025-07-15T05:16:51.247751522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518\" id:\"9092bb3ed0027ed695b77428f3ceda40e320ce7434cd69bf14b0d44aaa63af45\" pid:5000 exited_at:{seconds:1752556611 nanos:246489160}" Jul 15 05:16:51.284439 containerd[1996]: 2025-07-15 05:16:50.886 [INFO][4906] cni-plugin/plugin.go 340: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0 calico-apiserver-5d8549b7d9- calico-apiserver 34fd6f0e-186d-449e-b768-2f198ebe186d 834 0 2025-07-15 05:16:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d8549b7d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-224 calico-apiserver-5d8549b7d9-2jwmz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc75f95ac6b [] [] }} ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-2jwmz" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-" Jul 15 05:16:51.284439 containerd[1996]: 2025-07-15 05:16:50.887 [INFO][4906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-2jwmz" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" Jul 15 05:16:51.284439 containerd[1996]: 2025-07-15 05:16:51.020 [INFO][4958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" HandleID="k8s-pod-network.9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Workload="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" Jul 15 05:16:51.285087 containerd[1996]: 2025-07-15 05:16:51.021 [INFO][4958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" HandleID="k8s-pod-network.9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Workload="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-224", "pod":"calico-apiserver-5d8549b7d9-2jwmz", "timestamp":"2025-07-15 05:16:51.018795481 +0000 UTC"}, Hostname:"ip-172-31-18-224", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:51.285087 containerd[1996]: 2025-07-15 05:16:51.021 [INFO][4958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:51.285087 containerd[1996]: 2025-07-15 05:16:51.022 [INFO][4958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:51.285087 containerd[1996]: 2025-07-15 05:16:51.022 [INFO][4958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-224' Jul 15 05:16:51.285087 containerd[1996]: 2025-07-15 05:16:51.065 [INFO][4958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" host="ip-172-31-18-224" Jul 15 05:16:51.285087 containerd[1996]: 2025-07-15 05:16:51.082 [INFO][4958] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-224" Jul 15 05:16:51.285087 containerd[1996]: 2025-07-15 05:16:51.101 [INFO][4958] ipam/ipam.go 511: Trying affinity for 192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.285087 containerd[1996]: 2025-07-15 05:16:51.107 [INFO][4958] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.285087 containerd[1996]: 2025-07-15 05:16:51.113 [INFO][4958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.287012 containerd[1996]: 2025-07-15 05:16:51.114 [INFO][4958] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" host="ip-172-31-18-224" Jul 15 05:16:51.287012 containerd[1996]: 2025-07-15 05:16:51.120 [INFO][4958] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1 Jul 15 05:16:51.287012 containerd[1996]: 2025-07-15 05:16:51.134 [INFO][4958] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" host="ip-172-31-18-224" Jul 15 05:16:51.287012 containerd[1996]: 2025-07-15 05:16:51.154 [INFO][4958] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.113.131/26] block=192.168.113.128/26 handle="k8s-pod-network.9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" host="ip-172-31-18-224" Jul 15 05:16:51.287012 containerd[1996]: 2025-07-15 05:16:51.156 [INFO][4958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.131/26] handle="k8s-pod-network.9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" host="ip-172-31-18-224" Jul 15 05:16:51.287012 containerd[1996]: 2025-07-15 05:16:51.156 [INFO][4958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
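The ipam/ipam.go lines above trace one complete Calico auto-assignment cycle for the calico-apiserver-5d8549b7d9-2jwmz sandbox: acquire the host-wide IPAM lock, look up the host's block affinity, load block 192.168.113.128/26, claim one free address (192.168.113.131) under a per-pod handle, write the block back, and release the lock. The Go sketch below is a toy model of that serialize-then-claim pattern using only the standard library; the type and function names are illustrative, not Calico's real IPAM types.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator is a toy stand-in for a per-host Calico IPAM block
// (the log shows block 192.168.113.128/26 affine to ip-172-31-18-224).
type blockAllocator struct {
	mu        sync.Mutex            // plays the role of the "host-wide IPAM lock"
	block     netip.Prefix          // e.g. 192.168.113.128/26
	allocated map[netip.Addr]string // address -> handle that claimed it
}

func newBlockAllocator(cidr string) *blockAllocator {
	return &blockAllocator{
		block:     netip.MustParsePrefix(cidr),
		allocated: map[netip.Addr]string{},
	}
}

// assignOne mirrors the logged sequence: acquire lock, load the block,
// claim the first free address for the handle, write it back, release.
func (b *blockAllocator) assignOne(handle string) (netip.Addr, error) {
	b.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."

	for a := b.block.Addr(); b.block.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handle // "Writing block in order to claim IPs"
			return a, nil           // "Successfully claimed IPs"
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.block)
}

func main() {
	alloc := newBlockAllocator("192.168.113.128/26")
	// Pretend .128-.130 were claimed earlier in the boot, as the log implies.
	for i := 0; i < 3; i++ {
		alloc.assignOne(fmt.Sprintf("earlier-claim-%d", i))
	}
	ip, err := alloc.assignOne("calico-apiserver-5d8549b7d9-2jwmz")
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned:", ip) // 192.168.113.131, matching the log above
}

The same cycle repeats below for each of the remaining sandboxes, which is why every later "About to acquire host-wide IPAM lock" line only proceeds after the previous CNI invocation's "Released host-wide IPAM lock".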
Jul 15 05:16:51.287012 containerd[1996]: 2025-07-15 05:16:51.156 [INFO][4958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.131/26] IPv6=[] ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" HandleID="k8s-pod-network.9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Workload="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" Jul 15 05:16:51.287295 containerd[1996]: 2025-07-15 05:16:51.181 [INFO][4906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-2jwmz" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0", GenerateName:"calico-apiserver-5d8549b7d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"34fd6f0e-186d-449e-b768-2f198ebe186d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8549b7d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"", Pod:"calico-apiserver-5d8549b7d9-2jwmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc75f95ac6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.287415 containerd[1996]: 2025-07-15 05:16:51.185 [INFO][4906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.131/32] ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-2jwmz" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" Jul 15 05:16:51.287415 containerd[1996]: 2025-07-15 05:16:51.185 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc75f95ac6b ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-2jwmz" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" Jul 15 05:16:51.287415 containerd[1996]: 2025-07-15 05:16:51.234 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-2jwmz" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" Jul 15 05:16:51.290077 containerd[1996]: 2025-07-15 05:16:51.238 [INFO][4906] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-2jwmz" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0", GenerateName:"calico-apiserver-5d8549b7d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"34fd6f0e-186d-449e-b768-2f198ebe186d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8549b7d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1", Pod:"calico-apiserver-5d8549b7d9-2jwmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc75f95ac6b", MAC:"26:e5:b4:9a:f8:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.290191 containerd[1996]: 2025-07-15 05:16:51.266 [INFO][4906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-2jwmz" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--2jwmz-eth0" Jul 15 05:16:51.356645 systemd-networkd[1854]: cali308a1f013a9: Link UP Jul 15 05:16:51.358840 systemd-networkd[1854]: cali308a1f013a9: Gained carrier Jul 15 05:16:51.434162 containerd[1996]: time="2025-07-15T05:16:51.433473977Z" level=info msg="connecting to shim 9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1" address="unix:///run/containerd/s/8c9d4d9c08398d4680e34091cdaa6de9f16631c34ebc1c4ffb01ef51574ad36a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:51.448306 containerd[1996]: 2025-07-15 05:16:50.851 [INFO][4880] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0 coredns-668d6bf9bc- kube-system ddb65bd6-da34-49ee-a1d2-f42709d6e6d2 832 0 2025-07-15 05:16:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-224 coredns-668d6bf9bc-rzzg2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali308a1f013a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rzzg2" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-" Jul 15 05:16:51.448306 containerd[1996]: 2025-07-15 05:16:50.851 [INFO][4880] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzzg2" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" Jul 15 05:16:51.448306 containerd[1996]: 2025-07-15 05:16:51.136 [INFO][4956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" HandleID="k8s-pod-network.96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Workload="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" Jul 15 05:16:51.449122 containerd[1996]: 2025-07-15 05:16:51.138 [INFO][4956] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" HandleID="k8s-pod-network.96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Workload="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388970), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-224", "pod":"coredns-668d6bf9bc-rzzg2", "timestamp":"2025-07-15 05:16:51.136130599 +0000 UTC"}, Hostname:"ip-172-31-18-224", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:51.449122 containerd[1996]: 2025-07-15 05:16:51.142 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:51.449122 containerd[1996]: 2025-07-15 05:16:51.158 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:51.449122 containerd[1996]: 2025-07-15 05:16:51.158 [INFO][4956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-224' Jul 15 05:16:51.449122 containerd[1996]: 2025-07-15 05:16:51.185 [INFO][4956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" host="ip-172-31-18-224" Jul 15 05:16:51.449122 containerd[1996]: 2025-07-15 05:16:51.222 [INFO][4956] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-224" Jul 15 05:16:51.449122 containerd[1996]: 2025-07-15 05:16:51.243 [INFO][4956] ipam/ipam.go 511: Trying affinity for 192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.449122 containerd[1996]: 2025-07-15 05:16:51.249 [INFO][4956] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.449122 containerd[1996]: 2025-07-15 05:16:51.261 [INFO][4956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.449539 containerd[1996]: 2025-07-15 05:16:51.261 [INFO][4956] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" host="ip-172-31-18-224" Jul 15 05:16:51.449539 containerd[1996]: 2025-07-15 05:16:51.265 [INFO][4956] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f Jul 15 05:16:51.449539 containerd[1996]: 2025-07-15 05:16:51.283 [INFO][4956] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" host="ip-172-31-18-224" Jul 15 05:16:51.449539 containerd[1996]: 2025-07-15 05:16:51.307 [INFO][4956] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.113.132/26] block=192.168.113.128/26 handle="k8s-pod-network.96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" host="ip-172-31-18-224" Jul 15 05:16:51.449539 containerd[1996]: 2025-07-15 05:16:51.307 [INFO][4956] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.132/26] handle="k8s-pod-network.96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" host="ip-172-31-18-224" Jul 15 05:16:51.449539 containerd[1996]: 2025-07-15 05:16:51.308 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
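All six addresses handed out in this section (192.168.113.131 through 192.168.113.136) come from the same affine block, which matches Calico's default IPv4 block size of /26, i.e. 64 addresses per host block. A quick standard-library check of the block's span; the CIDR is taken straight from the log, the arithmetic is the only thing added:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.113.128/26")

	// A /26 covers 2^(32-26) = 64 addresses.
	size := 1 << (32 - block.Bits())

	first := block.Addr()
	last := first
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}

	fmt.Printf("block %s: %d addresses, %s - %s\n", block, size, first, last)
	// block 192.168.113.128/26: 64 addresses, 192.168.113.128 - 192.168.113.191
}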
Jul 15 05:16:51.449539 containerd[1996]: 2025-07-15 05:16:51.309 [INFO][4956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.132/26] IPv6=[] ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" HandleID="k8s-pod-network.96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Workload="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" Jul 15 05:16:51.449800 containerd[1996]: 2025-07-15 05:16:51.333 [INFO][4880] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzzg2" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ddb65bd6-da34-49ee-a1d2-f42709d6e6d2", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"", Pod:"coredns-668d6bf9bc-rzzg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali308a1f013a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.449800 containerd[1996]: 2025-07-15 05:16:51.333 [INFO][4880] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.132/32] ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzzg2" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" Jul 15 05:16:51.449800 containerd[1996]: 2025-07-15 05:16:51.333 [INFO][4880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali308a1f013a9 ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzzg2" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" Jul 15 05:16:51.449800 containerd[1996]: 2025-07-15 05:16:51.358 [INFO][4880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzzg2" 
WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" Jul 15 05:16:51.449800 containerd[1996]: 2025-07-15 05:16:51.360 [INFO][4880] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzzg2" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ddb65bd6-da34-49ee-a1d2-f42709d6e6d2", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f", Pod:"coredns-668d6bf9bc-rzzg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali308a1f013a9", MAC:"26:e7:61:b1:76:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.449800 containerd[1996]: 2025-07-15 05:16:51.412 [INFO][4880] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzzg2" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--rzzg2-eth0" Jul 15 05:16:51.561342 systemd[1]: Started cri-containerd-9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1.scope - libcontainer container 9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1. 
Jul 15 05:16:51.613933 containerd[1996]: time="2025-07-15T05:16:51.611795039Z" level=info msg="connecting to shim 96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f" address="unix:///run/containerd/s/d137b8eb154a6720f02441de83dc16f85fd0278f7a1e6c69804ea1ebccfaaf30" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:51.619497 systemd-networkd[1854]: calid4f34491333: Link UP Jul 15 05:16:51.630014 systemd-networkd[1854]: calid4f34491333: Gained carrier Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:50.916 [INFO][4884] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0 calico-apiserver-5d8549b7d9- calico-apiserver 87b9403b-286e-449f-b792-8973989d361e 833 0 2025-07-15 05:16:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d8549b7d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-224 calico-apiserver-5d8549b7d9-d5z85 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid4f34491333 [] [] }} ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-d5z85" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:50.924 [INFO][4884] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-d5z85" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.155 [INFO][4976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" HandleID="k8s-pod-network.d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Workload="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.155 [INFO][4976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" HandleID="k8s-pod-network.d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Workload="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-224", "pod":"calico-apiserver-5d8549b7d9-d5z85", "timestamp":"2025-07-15 05:16:51.155251337 +0000 UTC"}, Hostname:"ip-172-31-18-224", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.155 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.309 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.310 [INFO][4976] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-224' Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.353 [INFO][4976] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" host="ip-172-31-18-224" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.417 [INFO][4976] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-224" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.451 [INFO][4976] ipam/ipam.go 511: Trying affinity for 192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.464 [INFO][4976] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.505 [INFO][4976] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.505 [INFO][4976] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" host="ip-172-31-18-224" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.509 [INFO][4976] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84 Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.527 [INFO][4976] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" host="ip-172-31-18-224" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.545 [INFO][4976] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.113.133/26] block=192.168.113.128/26 handle="k8s-pod-network.d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" host="ip-172-31-18-224" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.545 [INFO][4976] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.133/26] handle="k8s-pod-network.d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" host="ip-172-31-18-224" Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.546 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:51.681967 containerd[1996]: 2025-07-15 05:16:51.548 [INFO][4976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.133/26] IPv6=[] ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" HandleID="k8s-pod-network.d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Workload="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" Jul 15 05:16:51.687215 containerd[1996]: 2025-07-15 05:16:51.568 [INFO][4884] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-d5z85" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0", GenerateName:"calico-apiserver-5d8549b7d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"87b9403b-286e-449f-b792-8973989d361e", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8549b7d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"", Pod:"calico-apiserver-5d8549b7d9-d5z85", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4f34491333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.687215 containerd[1996]: 2025-07-15 05:16:51.570 [INFO][4884] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.133/32] ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-d5z85" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" Jul 15 05:16:51.687215 containerd[1996]: 2025-07-15 05:16:51.571 [INFO][4884] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4f34491333 ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-d5z85" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" Jul 15 05:16:51.687215 containerd[1996]: 2025-07-15 05:16:51.633 [INFO][4884] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-d5z85" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" Jul 15 05:16:51.687215 containerd[1996]: 2025-07-15 05:16:51.634 [INFO][4884] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-d5z85" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0", GenerateName:"calico-apiserver-5d8549b7d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"87b9403b-286e-449f-b792-8973989d361e", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8549b7d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84", Pod:"calico-apiserver-5d8549b7d9-d5z85", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4f34491333", MAC:"2a:33:df:b2:07:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.687215 containerd[1996]: 2025-07-15 05:16:51.660 [INFO][4884] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" Namespace="calico-apiserver" Pod="calico-apiserver-5d8549b7d9-d5z85" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--apiserver--5d8549b7d9--d5z85-eth0" Jul 15 05:16:51.773171 systemd-networkd[1854]: calic71790758c0: Link UP Jul 15 05:16:51.785918 systemd-networkd[1854]: calic71790758c0: Gained carrier Jul 15 05:16:51.815708 systemd[1]: Started cri-containerd-96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f.scope - libcontainer container 96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f. 
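systemd-networkd reports each host-side Calico veth (califc75f95ac6b, cali308a1f013a9, calid4f34491333, calic71790758c0) going Link UP and then Gained carrier once the CNI plugin has wired it to its pod. One way to spot-check that state is to read the same sysfs attributes ip(8) consults; this sketch assumes it runs directly on ip-172-31-18-224 and that the interfaces still exist:

package main

import (
	"fmt"
	"os"
	"strings"
)

// linkState reads operstate and carrier for one interface from sysfs.
func linkState(iface string) (operstate string, carrier string, err error) {
	base := "/sys/class/net/" + iface
	op, err := os.ReadFile(base + "/operstate")
	if err != nil {
		return "", "", err
	}
	ca, err := os.ReadFile(base + "/carrier")
	if err != nil {
		return "", "", err
	}
	return strings.TrimSpace(string(op)), strings.TrimSpace(string(ca)), nil
}

func main() {
	for _, iface := range []string{"califc75f95ac6b", "cali308a1f013a9", "calid4f34491333"} {
		op, ca, err := linkState(iface)
		if err != nil {
			fmt.Println(iface, "not present:", err)
			continue
		}
		fmt.Printf("%s operstate=%s carrier=%s\n", iface, op, ca)
	}
}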
Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:50.919 [INFO][4895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0 calico-kube-controllers-665c85449c- calico-system 00cde189-05db-4c0c-92a4-d78eaf0ed38b 829 0 2025-07-15 05:16:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:665c85449c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-224 calico-kube-controllers-665c85449c-swjcb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic71790758c0 [] [] }} ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Namespace="calico-system" Pod="calico-kube-controllers-665c85449c-swjcb" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:50.922 [INFO][4895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Namespace="calico-system" Pod="calico-kube-controllers-665c85449c-swjcb" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.168 [INFO][4970] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" HandleID="k8s-pod-network.ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Workload="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.168 [INFO][4970] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" HandleID="k8s-pod-network.ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Workload="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011fcd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-224", "pod":"calico-kube-controllers-665c85449c-swjcb", "timestamp":"2025-07-15 05:16:51.168152881 +0000 UTC"}, Hostname:"ip-172-31-18-224", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.168 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.546 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.546 [INFO][4970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-224' Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.617 [INFO][4970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" host="ip-172-31-18-224" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.635 [INFO][4970] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-224" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.667 [INFO][4970] ipam/ipam.go 511: Trying affinity for 192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.679 [INFO][4970] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.685 [INFO][4970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.686 [INFO][4970] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" host="ip-172-31-18-224" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.700 [INFO][4970] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41 Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.713 [INFO][4970] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" host="ip-172-31-18-224" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.728 [INFO][4970] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.113.134/26] block=192.168.113.128/26 handle="k8s-pod-network.ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" host="ip-172-31-18-224" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.728 [INFO][4970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.134/26] handle="k8s-pod-network.ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" host="ip-172-31-18-224" Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.728 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:51.878005 containerd[1996]: 2025-07-15 05:16:51.728 [INFO][4970] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.134/26] IPv6=[] ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" HandleID="k8s-pod-network.ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Workload="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" Jul 15 05:16:51.881474 containerd[1996]: 2025-07-15 05:16:51.739 [INFO][4895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Namespace="calico-system" Pod="calico-kube-controllers-665c85449c-swjcb" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0", GenerateName:"calico-kube-controllers-665c85449c-", Namespace:"calico-system", SelfLink:"", UID:"00cde189-05db-4c0c-92a4-d78eaf0ed38b", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"665c85449c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"", Pod:"calico-kube-controllers-665c85449c-swjcb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic71790758c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.881474 containerd[1996]: 2025-07-15 05:16:51.740 [INFO][4895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.134/32] ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Namespace="calico-system" Pod="calico-kube-controllers-665c85449c-swjcb" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" Jul 15 05:16:51.881474 containerd[1996]: 2025-07-15 05:16:51.740 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic71790758c0 ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Namespace="calico-system" Pod="calico-kube-controllers-665c85449c-swjcb" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" Jul 15 05:16:51.881474 containerd[1996]: 2025-07-15 05:16:51.791 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Namespace="calico-system" Pod="calico-kube-controllers-665c85449c-swjcb" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" Jul 15 05:16:51.881474 containerd[1996]: 
2025-07-15 05:16:51.795 [INFO][4895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Namespace="calico-system" Pod="calico-kube-controllers-665c85449c-swjcb" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0", GenerateName:"calico-kube-controllers-665c85449c-", Namespace:"calico-system", SelfLink:"", UID:"00cde189-05db-4c0c-92a4-d78eaf0ed38b", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"665c85449c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41", Pod:"calico-kube-controllers-665c85449c-swjcb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic71790758c0", MAC:"ce:c1:b3:e3:e3:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.881474 containerd[1996]: 2025-07-15 05:16:51.846 [INFO][4895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" Namespace="calico-system" Pod="calico-kube-controllers-665c85449c-swjcb" WorkloadEndpoint="ip--172--31--18--224-k8s-calico--kube--controllers--665c85449c--swjcb-eth0" Jul 15 05:16:51.983032 systemd-networkd[1854]: cali90fd552888f: Link UP Jul 15 05:16:51.987534 systemd-networkd[1854]: cali90fd552888f: Gained carrier Jul 15 05:16:52.034366 containerd[1996]: time="2025-07-15T05:16:52.034316849Z" level=info msg="connecting to shim d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84" address="unix:///run/containerd/s/fc7a27616fac429ec0fec9a0edc0507034363482d7bce3cd3ad41f03f08a4003" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:50.913 [INFO][4926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0 csi-node-driver- calico-system 792c1079-d6ad-4977-9449-eb7585301bdc 696 0 2025-07-15 05:16:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-224 csi-node-driver-plmkb eth0 csi-node-driver [] [] 
[kns.calico-system ksa.calico-system.csi-node-driver] cali90fd552888f [] [] }} ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Namespace="calico-system" Pod="csi-node-driver-plmkb" WorkloadEndpoint="ip--172--31--18--224-k8s-csi--node--driver--plmkb-" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:50.913 [INFO][4926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Namespace="calico-system" Pod="csi-node-driver-plmkb" WorkloadEndpoint="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.224 [INFO][4971] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" HandleID="k8s-pod-network.fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Workload="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.224 [INFO][4971] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" HandleID="k8s-pod-network.fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Workload="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e300), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-224", "pod":"csi-node-driver-plmkb", "timestamp":"2025-07-15 05:16:51.224524111 +0000 UTC"}, Hostname:"ip-172-31-18-224", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.226 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.736 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.737 [INFO][4971] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-224' Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.768 [INFO][4971] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" host="ip-172-31-18-224" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.809 [INFO][4971] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-224" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.844 [INFO][4971] ipam/ipam.go 511: Trying affinity for 192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.862 [INFO][4971] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.873 [INFO][4971] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.875 [INFO][4971] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" host="ip-172-31-18-224" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.885 [INFO][4971] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216 Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.901 [INFO][4971] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" host="ip-172-31-18-224" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.926 [INFO][4971] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.113.135/26] block=192.168.113.128/26 handle="k8s-pod-network.fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" host="ip-172-31-18-224" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.926 [INFO][4971] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.135/26] handle="k8s-pod-network.fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" host="ip-172-31-18-224" Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.926 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:52.068917 containerd[1996]: 2025-07-15 05:16:51.926 [INFO][4971] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.135/26] IPv6=[] ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" HandleID="k8s-pod-network.fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Workload="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" Jul 15 05:16:52.071439 containerd[1996]: 2025-07-15 05:16:51.945 [INFO][4926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Namespace="calico-system" Pod="csi-node-driver-plmkb" WorkloadEndpoint="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"792c1079-d6ad-4977-9449-eb7585301bdc", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"", Pod:"csi-node-driver-plmkb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90fd552888f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:52.071439 containerd[1996]: 2025-07-15 05:16:51.947 [INFO][4926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.135/32] ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Namespace="calico-system" Pod="csi-node-driver-plmkb" WorkloadEndpoint="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" Jul 15 05:16:52.071439 containerd[1996]: 2025-07-15 05:16:51.948 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90fd552888f ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Namespace="calico-system" Pod="csi-node-driver-plmkb" WorkloadEndpoint="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" Jul 15 05:16:52.071439 containerd[1996]: 2025-07-15 05:16:51.992 [INFO][4926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Namespace="calico-system" Pod="csi-node-driver-plmkb" WorkloadEndpoint="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" Jul 15 05:16:52.071439 containerd[1996]: 2025-07-15 05:16:51.992 [INFO][4926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" 
Namespace="calico-system" Pod="csi-node-driver-plmkb" WorkloadEndpoint="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"792c1079-d6ad-4977-9449-eb7585301bdc", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216", Pod:"csi-node-driver-plmkb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90fd552888f", MAC:"62:ce:e8:5e:41:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:52.071439 containerd[1996]: 2025-07-15 05:16:52.037 [INFO][4926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" Namespace="calico-system" Pod="csi-node-driver-plmkb" WorkloadEndpoint="ip--172--31--18--224-k8s-csi--node--driver--plmkb-eth0" Jul 15 05:16:52.191174 systemd-networkd[1854]: calicdfc4ec6a6e: Link UP Jul 15 05:16:52.193225 systemd-networkd[1854]: calicdfc4ec6a6e: Gained carrier Jul 15 05:16:52.194740 containerd[1996]: time="2025-07-15T05:16:52.194684928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8549b7d9-2jwmz,Uid:34fd6f0e-186d-449e-b768-2f198ebe186d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1\"" Jul 15 05:16:52.215381 containerd[1996]: time="2025-07-15T05:16:52.215335505Z" level=info msg="connecting to shim ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41" address="unix:///run/containerd/s/36d1b29e1b789b308add4d520bd69aa4485a9a7dcea8193d0348c5e2113d8bb1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:52.252638 systemd[1]: Started cri-containerd-d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84.scope - libcontainer container d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84. 
Jul 15 05:16:52.275404 containerd[1996]: time="2025-07-15T05:16:52.275357117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518\" id:\"c579cb418d2b604eeb1b15aee20df0a92fad9dfcc9f0755f51426aecac86bfdf\" pid:5057 exited_at:{seconds:1752556612 nanos:200057566}" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:50.918 [INFO][4916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0 coredns-668d6bf9bc- kube-system 09c0c105-5305-45ec-9f9e-1db93f47968c 822 0 2025-07-15 05:16:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-224 coredns-668d6bf9bc-nwgx4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicdfc4ec6a6e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwgx4" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:50.921 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwgx4" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:51.238 [INFO][4978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" HandleID="k8s-pod-network.227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Workload="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:51.239 [INFO][4978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" HandleID="k8s-pod-network.227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Workload="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123860), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-224", "pod":"coredns-668d6bf9bc-nwgx4", "timestamp":"2025-07-15 05:16:51.238713199 +0000 UTC"}, Hostname:"ip-172-31-18-224", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:51.239 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:51.926 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
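The TaskExit event above carries its exit time as a raw protobuf timestamp (seconds:1752556612 nanos:200057566). Converting it confirms it lands where the surrounding journal lines say, a fraction of a second before the 05:16:52.275 handler entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event in the log above.
	t := time.Unix(1752556612, 200057566).UTC()
	fmt.Println(t.Format(time.RFC3339Nano)) // 2025-07-15T05:16:52.200057566Z
}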
Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:51.926 [INFO][4978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-224' Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:51.963 [INFO][4978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" host="ip-172-31-18-224" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:51.990 [INFO][4978] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-224" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.043 [INFO][4978] ipam/ipam.go 511: Trying affinity for 192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.079 [INFO][4978] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.101 [INFO][4978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ip-172-31-18-224" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.101 [INFO][4978] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" host="ip-172-31-18-224" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.106 [INFO][4978] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.125 [INFO][4978] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" host="ip-172-31-18-224" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.148 [INFO][4978] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.113.136/26] block=192.168.113.128/26 handle="k8s-pod-network.227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" host="ip-172-31-18-224" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.148 [INFO][4978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.136/26] handle="k8s-pod-network.227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" host="ip-172-31-18-224" Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.150 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
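The IPAM sequence above (host-wide lock acquired, affinity confirmed for block 192.168.113.128/26, one address claimed, lock released) boils down to CIDR arithmetic over a 64-address affinity block. As a stand-alone sketch using only Go's standard library, and not Calico's own IPAM code, the following shows the size of such a /26 block and that the two addresses assigned in this log (.135 for csi-node-driver-plmkb, .136 for coredns-668d6bf9bc-nwgx4) fall inside it:

```go
// Illustration only, not Calico IPAM code: the /26 affinity block from the log
// spans 192.168.113.128-192.168.113.191, i.e. 64 addresses.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.113.128/26") // block loaded in the entries above
	assigned := []netip.Addr{
		netip.MustParseAddr("192.168.113.135"), // csi-node-driver-plmkb
		netip.MustParseAddr("192.168.113.136"), // coredns-668d6bf9bc-nwgx4
	}

	size := 1 << (32 - block.Bits()) // 2^6 = 64 addresses in a /26
	fmt.Printf("block %s holds %d addresses\n", block, size)

	for _, a := range assigned {
		fmt.Printf("%s inside block: %v\n", a, block.Contains(a))
	}
}
```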
Jul 15 05:16:52.286541 containerd[1996]: 2025-07-15 05:16:52.150 [INFO][4978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.136/26] IPv6=[] ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" HandleID="k8s-pod-network.227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Workload="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" Jul 15 05:16:52.289089 containerd[1996]: 2025-07-15 05:16:52.164 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwgx4" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"09c0c105-5305-45ec-9f9e-1db93f47968c", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"", Pod:"coredns-668d6bf9bc-nwgx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdfc4ec6a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:52.289089 containerd[1996]: 2025-07-15 05:16:52.164 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.136/32] ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwgx4" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" Jul 15 05:16:52.289089 containerd[1996]: 2025-07-15 05:16:52.164 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdfc4ec6a6e ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwgx4" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" Jul 15 05:16:52.289089 containerd[1996]: 2025-07-15 05:16:52.198 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwgx4" 
WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" Jul 15 05:16:52.289089 containerd[1996]: 2025-07-15 05:16:52.204 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwgx4" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"09c0c105-5305-45ec-9f9e-1db93f47968c", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-224", ContainerID:"227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf", Pod:"coredns-668d6bf9bc-nwgx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdfc4ec6a6e", MAC:"4e:2a:a6:84:c9:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:52.289089 containerd[1996]: 2025-07-15 05:16:52.265 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwgx4" WorkloadEndpoint="ip--172--31--18--224-k8s-coredns--668d6bf9bc--nwgx4-eth0" Jul 15 05:16:52.289089 containerd[1996]: time="2025-07-15T05:16:52.286875688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzzg2,Uid:ddb65bd6-da34-49ee-a1d2-f42709d6e6d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f\"" Jul 15 05:16:52.345152 systemd[1]: Started cri-containerd-ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41.scope - libcontainer container ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41. 
Jul 15 05:16:52.351924 containerd[1996]: time="2025-07-15T05:16:52.351234805Z" level=info msg="CreateContainer within sandbox \"96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:16:52.389077 containerd[1996]: time="2025-07-15T05:16:52.389023263Z" level=info msg="connecting to shim fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216" address="unix:///run/containerd/s/f63638227a75931b04d6faaef813ce437f5a126e9ec3d81dd990a5f17812cc3c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:52.433595 containerd[1996]: time="2025-07-15T05:16:52.430765391Z" level=info msg="Container 37d9e585d15ac243edf8fad4e36811fc24b200b258d3854d64a49167cd2f9128: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:52.485745 systemd[1]: Started cri-containerd-fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216.scope - libcontainer container fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216. Jul 15 05:16:52.490528 containerd[1996]: time="2025-07-15T05:16:52.490167774Z" level=info msg="connecting to shim 227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf" address="unix:///run/containerd/s/1ce9d6ecf58808fb82d5412b4b447db6e6c526e65aefd1b0f0577686a5a22fe3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:52.500544 containerd[1996]: time="2025-07-15T05:16:52.500486386Z" level=info msg="CreateContainer within sandbox \"96d6158e2a73c985ac610960d91677fd51534a39a0c4dfc005b36bd23198352f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37d9e585d15ac243edf8fad4e36811fc24b200b258d3854d64a49167cd2f9128\"" Jul 15 05:16:52.504598 containerd[1996]: time="2025-07-15T05:16:52.504434460Z" level=info msg="StartContainer for \"37d9e585d15ac243edf8fad4e36811fc24b200b258d3854d64a49167cd2f9128\"" Jul 15 05:16:52.516792 containerd[1996]: time="2025-07-15T05:16:52.515810949Z" level=info msg="connecting to shim 37d9e585d15ac243edf8fad4e36811fc24b200b258d3854d64a49167cd2f9128" address="unix:///run/containerd/s/d137b8eb154a6720f02441de83dc16f85fd0278f7a1e6c69804ea1ebccfaaf30" protocol=ttrpc version=3 Jul 15 05:16:52.636997 systemd[1]: Started cri-containerd-227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf.scope - libcontainer container 227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf. Jul 15 05:16:52.680235 systemd[1]: Started cri-containerd-37d9e585d15ac243edf8fad4e36811fc24b200b258d3854d64a49167cd2f9128.scope - libcontainer container 37d9e585d15ac243edf8fad4e36811fc24b200b258d3854d64a49167cd2f9128. 
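The "connecting to shim ... address=\"unix:///run/containerd/s/...\" ... protocol=ttrpc version=3" entries describe containerd reaching each container's shim over a per-shim unix socket. Purely as an illustration, and not how containerd's own client works (it speaks ttrpc over that socket), a minimal Go sketch that checks whether such a socket exists and accepts a connection could look like this; the path is copied from one of the entries above:

```go
// Hedged sketch: probe a containerd shim socket seen in the log. This only confirms
// the unix socket is present and connectable; it does not speak the ttrpc protocol.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path copied from the "connecting to shim" entry above; adjust as needed.
	sock := "/run/containerd/s/d137b8eb154a6720f02441de83dc16f85fd0278f7a1e6c69804ea1ebccfaaf30"

	if _, err := os.Stat(sock); err != nil {
		fmt.Fprintln(os.Stderr, "socket not present:", err)
		os.Exit(1)
	}

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial failed:", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("shim socket accepting connections:", sock)
}
```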
Jul 15 05:16:52.703576 containerd[1996]: time="2025-07-15T05:16:52.703522459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8549b7d9-d5z85,Uid:87b9403b-286e-449f-b792-8973989d361e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84\"" Jul 15 05:16:52.723848 systemd-networkd[1854]: calid4f34491333: Gained IPv6LL Jul 15 05:16:52.759361 containerd[1996]: time="2025-07-15T05:16:52.759311544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-665c85449c-swjcb,Uid:00cde189-05db-4c0c-92a4-d78eaf0ed38b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41\"" Jul 15 05:16:52.840921 containerd[1996]: time="2025-07-15T05:16:52.840671494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwgx4,Uid:09c0c105-5305-45ec-9f9e-1db93f47968c,Namespace:kube-system,Attempt:0,} returns sandbox id \"227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf\"" Jul 15 05:16:52.848219 containerd[1996]: time="2025-07-15T05:16:52.848148906Z" level=info msg="CreateContainer within sandbox \"227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:16:52.872004 containerd[1996]: time="2025-07-15T05:16:52.870169339Z" level=info msg="StartContainer for \"37d9e585d15ac243edf8fad4e36811fc24b200b258d3854d64a49167cd2f9128\" returns successfully" Jul 15 05:16:52.881920 containerd[1996]: time="2025-07-15T05:16:52.880051347Z" level=info msg="Container 13287df2f09a305a65fed09f1ed04f007dd7b9c90060114715467e5777fd9d3f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:52.888819 containerd[1996]: time="2025-07-15T05:16:52.887623316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plmkb,Uid:792c1079-d6ad-4977-9449-eb7585301bdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216\"" Jul 15 05:16:52.891125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124875758.mount: Deactivated successfully. Jul 15 05:16:52.895868 containerd[1996]: time="2025-07-15T05:16:52.895754102Z" level=info msg="CreateContainer within sandbox \"227778273093fe3b8e664f827db8e14934dcff7a4383d30cbe443b739940ffcf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13287df2f09a305a65fed09f1ed04f007dd7b9c90060114715467e5777fd9d3f\"" Jul 15 05:16:52.896917 containerd[1996]: time="2025-07-15T05:16:52.896863662Z" level=info msg="StartContainer for \"13287df2f09a305a65fed09f1ed04f007dd7b9c90060114715467e5777fd9d3f\"" Jul 15 05:16:52.898187 containerd[1996]: time="2025-07-15T05:16:52.898154911Z" level=info msg="connecting to shim 13287df2f09a305a65fed09f1ed04f007dd7b9c90060114715467e5777fd9d3f" address="unix:///run/containerd/s/1ce9d6ecf58808fb82d5412b4b447db6e6c526e65aefd1b0f0577686a5a22fe3" protocol=ttrpc version=3 Jul 15 05:16:52.933134 systemd[1]: Started cri-containerd-13287df2f09a305a65fed09f1ed04f007dd7b9c90060114715467e5777fd9d3f.scope - libcontainer container 13287df2f09a305a65fed09f1ed04f007dd7b9c90060114715467e5777fd9d3f. 
Jul 15 05:16:53.040978 containerd[1996]: time="2025-07-15T05:16:53.040841413Z" level=info msg="StartContainer for \"13287df2f09a305a65fed09f1ed04f007dd7b9c90060114715467e5777fd9d3f\" returns successfully" Jul 15 05:16:53.209934 kubelet[3322]: I0715 05:16:53.177796 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nwgx4" podStartSLOduration=41.171576928 podStartE2EDuration="41.171576928s" podCreationTimestamp="2025-07-15 05:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:53.104199685 +0000 UTC m=+46.696534651" watchObservedRunningTime="2025-07-15 05:16:53.171576928 +0000 UTC m=+46.763912068" Jul 15 05:16:53.209934 kubelet[3322]: I0715 05:16:53.208721 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rzzg2" podStartSLOduration=41.208700341 podStartE2EDuration="41.208700341s" podCreationTimestamp="2025-07-15 05:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:53.17131594 +0000 UTC m=+46.763650918" watchObservedRunningTime="2025-07-15 05:16:53.208700341 +0000 UTC m=+46.801035303" Jul 15 05:16:53.235125 systemd-networkd[1854]: calic71790758c0: Gained IPv6LL Jul 15 05:16:53.235455 systemd-networkd[1854]: califc75f95ac6b: Gained IPv6LL Jul 15 05:16:53.299218 systemd-networkd[1854]: cali308a1f013a9: Gained IPv6LL Jul 15 05:16:53.363271 systemd-networkd[1854]: cali90fd552888f: Gained IPv6LL Jul 15 05:16:53.427339 systemd-networkd[1854]: calicdfc4ec6a6e: Gained IPv6LL Jul 15 05:16:54.488707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262602801.mount: Deactivated successfully. 
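The systemd-networkd "Gained IPv6LL" entries above record each Calico veth acquiring an IPv6 link-local address, and the ntpd entries that follow bind port 123 on those addresses, all of the form fe80::ecee:eeff:feee:eeee%<zone>. As the listener list shows, every cali* interface here carries the same link-local address, so the zone suffix is what tells them apart. A minimal standard-library sketch (illustrative only) for parsing one such zoned address:

```go
// Illustration of the zoned link-local addresses ntpd binds below; not ntpd or Calico code.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Address form taken from the log; the zone names the host-side Calico veth.
	addr := netip.MustParseAddr("fe80::ecee:eeff:feee:eeee%cali90fd552888f")

	fmt.Println("link-local unicast:", addr.IsLinkLocalUnicast()) // true
	fmt.Println("zone (interface):  ", addr.Zone())               // cali90fd552888f
	fmt.Println("listener form:     ", netip.AddrPortFrom(addr, 123))
}
```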
Jul 15 05:16:55.632874 ntpd[1973]: Listen normally on 8 vxlan.calico 192.168.113.128:123 Jul 15 05:16:55.633021 ntpd[1973]: Listen normally on 9 calie2e6e0f2945 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 8 vxlan.calico 192.168.113.128:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 9 calie2e6e0f2945 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 10 vxlan.calico [fe80::64ba:f8ff:fe72:f0b8%5]:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 11 calie3812fa4608 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 12 califc75f95ac6b [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 13 cali308a1f013a9 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 14 calid4f34491333 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 15 calic71790758c0 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 16 cali90fd552888f [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 05:16:55.635639 ntpd[1973]: 15 Jul 05:16:55 ntpd[1973]: Listen normally on 17 calicdfc4ec6a6e [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 05:16:55.633084 ntpd[1973]: Listen normally on 10 vxlan.calico [fe80::64ba:f8ff:fe72:f0b8%5]:123 Jul 15 05:16:55.633121 ntpd[1973]: Listen normally on 11 calie3812fa4608 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 05:16:55.633156 ntpd[1973]: Listen normally on 12 califc75f95ac6b [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 05:16:55.633191 ntpd[1973]: Listen normally on 13 cali308a1f013a9 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 05:16:55.633225 ntpd[1973]: Listen normally on 14 calid4f34491333 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 05:16:55.633270 ntpd[1973]: Listen normally on 15 calic71790758c0 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 05:16:55.633303 ntpd[1973]: Listen normally on 16 cali90fd552888f [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 05:16:55.633336 ntpd[1973]: Listen normally on 17 calicdfc4ec6a6e [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 05:16:55.822268 containerd[1996]: time="2025-07-15T05:16:55.822208637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:55.823463 containerd[1996]: time="2025-07-15T05:16:55.823419931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 05:16:55.825707 containerd[1996]: time="2025-07-15T05:16:55.825670917Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:55.830188 containerd[1996]: time="2025-07-15T05:16:55.830146440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:55.831877 containerd[1996]: time="2025-07-15T05:16:55.831836373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id 
\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.819131888s" Jul 15 05:16:55.831877 containerd[1996]: time="2025-07-15T05:16:55.831880496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 05:16:55.833509 containerd[1996]: time="2025-07-15T05:16:55.833409803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 05:16:55.837890 containerd[1996]: time="2025-07-15T05:16:55.837103974Z" level=info msg="CreateContainer within sandbox \"2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 05:16:55.849888 containerd[1996]: time="2025-07-15T05:16:55.849844367Z" level=info msg="Container 9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:55.875939 containerd[1996]: time="2025-07-15T05:16:55.875881366Z" level=info msg="CreateContainer within sandbox \"2f29202c2d66e63a41dc90412ad48f5d96a312081666003b86270f11e7a5f6e4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\"" Jul 15 05:16:55.877858 containerd[1996]: time="2025-07-15T05:16:55.877805636Z" level=info msg="StartContainer for \"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\"" Jul 15 05:16:55.882027 containerd[1996]: time="2025-07-15T05:16:55.881975973Z" level=info msg="connecting to shim 9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe" address="unix:///run/containerd/s/39067d4554bf8fb52a1b6f2f5e5b47b772721348ccc668e178595ce1302ab2ea" protocol=ttrpc version=3 Jul 15 05:16:55.999650 systemd[1]: Started cri-containerd-9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe.scope - libcontainer container 9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe. 
Jul 15 05:16:56.300241 containerd[1996]: time="2025-07-15T05:16:56.300052467Z" level=info msg="StartContainer for \"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\" returns successfully" Jul 15 05:16:57.136114 kubelet[3322]: I0715 05:16:57.136040 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-bd8wp" podStartSLOduration=24.195058593 podStartE2EDuration="30.13601459s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:49.892166147 +0000 UTC m=+43.484501097" lastFinishedPulling="2025-07-15 05:16:55.833122139 +0000 UTC m=+49.425457094" observedRunningTime="2025-07-15 05:16:57.131862654 +0000 UTC m=+50.724197615" watchObservedRunningTime="2025-07-15 05:16:57.13601459 +0000 UTC m=+50.728349552" Jul 15 05:16:57.302540 containerd[1996]: time="2025-07-15T05:16:57.302478137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\" id:\"585d6e65717db4cf2ffc15c1c5335016b56c8768125b4ef4001ed8e782d4fb47\" pid:5515 exit_status:1 exited_at:{seconds:1752556617 nanos:301492242}" Jul 15 05:16:58.371083 containerd[1996]: time="2025-07-15T05:16:58.370852493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\" id:\"2b504ba0a4de636a0d076f77ddbe0a13007c175b203acf29c7a6d081508b5e68\" pid:5539 exit_status:1 exited_at:{seconds:1752556618 nanos:369975798}" Jul 15 05:16:59.650081 containerd[1996]: time="2025-07-15T05:16:59.649797901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\" id:\"e94d8bbfe9d9557f8161ca189c36295a0a027b1e4c2961bad3f63ca7b56ac85f\" pid:5568 exit_status:1 exited_at:{seconds:1752556619 nanos:649301041}" Jul 15 05:17:00.537236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518271749.mount: Deactivated successfully. 
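The kubelet pod_startup_latency_tracker entries report two figures per pod: podStartE2EDuration (creation to observed running) and podStartSLOduration. For goldmane-768f4c5c69-bd8wp above, the two differ by exactly the image-pull window (lastFinishedPulling minus firstStartedPulling), consistent with the SLO figure excluding pull time. A hedged arithmetic check using only values copied from that entry (an illustration, not kubelet's implementation):

```go
// Arithmetic check of the goldmane pod-startup entry above: the SLO duration should come
// out as the E2E duration minus the image-pull window, to within the precision logged.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	firstPull := parse("2025-07-15 05:16:49.892166147 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-07-15 05:16:55.833122139 +0000 UTC")  // lastFinishedPulling

	e2e := 30136014590 * time.Nanosecond // podStartE2EDuration="30.13601459s"
	pull := lastPull.Sub(firstPull)

	fmt.Println("image pull window:", pull)     // 5.940955992s
	fmt.Println("E2E minus pull:   ", e2e-pull) // ~24.1950586s, matching podStartSLOduration=24.195058593
}
```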
Jul 15 05:17:00.566767 containerd[1996]: time="2025-07-15T05:17:00.566702095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:00.572123 containerd[1996]: time="2025-07-15T05:17:00.572040017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 05:17:00.573987 containerd[1996]: time="2025-07-15T05:17:00.573944987Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:00.582410 containerd[1996]: time="2025-07-15T05:17:00.582167370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:00.584622 containerd[1996]: time="2025-07-15T05:17:00.584569701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.751124525s" Jul 15 05:17:00.586472 containerd[1996]: time="2025-07-15T05:17:00.586437274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 05:17:00.590047 containerd[1996]: time="2025-07-15T05:17:00.590014055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:17:00.594697 containerd[1996]: time="2025-07-15T05:17:00.594103779Z" level=info msg="CreateContainer within sandbox \"635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 05:17:00.610239 containerd[1996]: time="2025-07-15T05:17:00.610192632Z" level=info msg="Container 9af75a3e433c0a405f7af6527a7cab3c156f21e61a31457243a9b3c99de554bb: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:00.665786 containerd[1996]: time="2025-07-15T05:17:00.665740966Z" level=info msg="CreateContainer within sandbox \"635ae8f9e3c03edeccb4a3ba42d1480f0809ff01c76928137ff81faafc035e85\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9af75a3e433c0a405f7af6527a7cab3c156f21e61a31457243a9b3c99de554bb\"" Jul 15 05:17:00.667685 containerd[1996]: time="2025-07-15T05:17:00.667568043Z" level=info msg="StartContainer for \"9af75a3e433c0a405f7af6527a7cab3c156f21e61a31457243a9b3c99de554bb\"" Jul 15 05:17:00.671323 containerd[1996]: time="2025-07-15T05:17:00.671278501Z" level=info msg="connecting to shim 9af75a3e433c0a405f7af6527a7cab3c156f21e61a31457243a9b3c99de554bb" address="unix:///run/containerd/s/edab6c7a6c16f4ee8b3acee03ab9abbc574bd55a750e0f2433f01d23582d59fc" protocol=ttrpc version=3 Jul 15 05:17:00.729177 systemd[1]: Started cri-containerd-9af75a3e433c0a405f7af6527a7cab3c156f21e61a31457243a9b3c99de554bb.scope - libcontainer container 9af75a3e433c0a405f7af6527a7cab3c156f21e61a31457243a9b3c99de554bb. 
Jul 15 05:17:00.870756 containerd[1996]: time="2025-07-15T05:17:00.870612607Z" level=info msg="StartContainer for \"9af75a3e433c0a405f7af6527a7cab3c156f21e61a31457243a9b3c99de554bb\" returns successfully" Jul 15 05:17:01.214106 kubelet[3322]: I0715 05:17:01.213872 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5776c95fbf-qzrx2" podStartSLOduration=3.24857756 podStartE2EDuration="15.213845974s" podCreationTimestamp="2025-07-15 05:16:46 +0000 UTC" firstStartedPulling="2025-07-15 05:16:48.623880704 +0000 UTC m=+42.216215654" lastFinishedPulling="2025-07-15 05:17:00.589149105 +0000 UTC m=+54.181484068" observedRunningTime="2025-07-15 05:17:01.209550824 +0000 UTC m=+54.801885786" watchObservedRunningTime="2025-07-15 05:17:01.213845974 +0000 UTC m=+54.806180933" Jul 15 05:17:04.654663 containerd[1996]: time="2025-07-15T05:17:04.654481679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:04.657012 containerd[1996]: time="2025-07-15T05:17:04.656978306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 05:17:04.658284 containerd[1996]: time="2025-07-15T05:17:04.658252652Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:04.662919 containerd[1996]: time="2025-07-15T05:17:04.662699128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:04.664364 containerd[1996]: time="2025-07-15T05:17:04.663879688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.073822098s" Jul 15 05:17:04.664698 containerd[1996]: time="2025-07-15T05:17:04.664673079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:17:04.666994 containerd[1996]: time="2025-07-15T05:17:04.666969781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:17:04.712601 containerd[1996]: time="2025-07-15T05:17:04.712551610Z" level=info msg="CreateContainer within sandbox \"9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:17:04.730932 containerd[1996]: time="2025-07-15T05:17:04.729064421Z" level=info msg="Container 1ba6b5023aa0736979eef6b2470df71fe413d3fdc93bb0178c18849dc862611c: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:04.743429 containerd[1996]: time="2025-07-15T05:17:04.743299528Z" level=info msg="CreateContainer within sandbox \"9b59e2cde954f924aa4ccdaeba324be8c4260cf05453abdff3bffa2cdd5f4db1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1ba6b5023aa0736979eef6b2470df71fe413d3fdc93bb0178c18849dc862611c\"" Jul 15 05:17:04.745809 containerd[1996]: time="2025-07-15T05:17:04.744293318Z" level=info 
msg="StartContainer for \"1ba6b5023aa0736979eef6b2470df71fe413d3fdc93bb0178c18849dc862611c\"" Jul 15 05:17:04.745809 containerd[1996]: time="2025-07-15T05:17:04.745592840Z" level=info msg="connecting to shim 1ba6b5023aa0736979eef6b2470df71fe413d3fdc93bb0178c18849dc862611c" address="unix:///run/containerd/s/8c9d4d9c08398d4680e34091cdaa6de9f16631c34ebc1c4ffb01ef51574ad36a" protocol=ttrpc version=3 Jul 15 05:17:04.744403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456308249.mount: Deactivated successfully. Jul 15 05:17:04.792996 systemd[1]: Started cri-containerd-1ba6b5023aa0736979eef6b2470df71fe413d3fdc93bb0178c18849dc862611c.scope - libcontainer container 1ba6b5023aa0736979eef6b2470df71fe413d3fdc93bb0178c18849dc862611c. Jul 15 05:17:04.943563 containerd[1996]: time="2025-07-15T05:17:04.943449407Z" level=info msg="StartContainer for \"1ba6b5023aa0736979eef6b2470df71fe413d3fdc93bb0178c18849dc862611c\" returns successfully" Jul 15 05:17:05.084765 containerd[1996]: time="2025-07-15T05:17:05.084708016Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:05.086799 containerd[1996]: time="2025-07-15T05:17:05.086752009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 05:17:05.089925 containerd[1996]: time="2025-07-15T05:17:05.089814585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 422.527136ms" Jul 15 05:17:05.089925 containerd[1996]: time="2025-07-15T05:17:05.089872237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:17:05.094394 containerd[1996]: time="2025-07-15T05:17:05.093139000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 05:17:05.097922 containerd[1996]: time="2025-07-15T05:17:05.096783122Z" level=info msg="CreateContainer within sandbox \"d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:17:05.116778 containerd[1996]: time="2025-07-15T05:17:05.114001472Z" level=info msg="Container afe982a71981fa9da0135dc1fd19c029d4fb3c2d39d13fe5b4db7cf97348d1bb: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:05.133440 containerd[1996]: time="2025-07-15T05:17:05.133395219Z" level=info msg="CreateContainer within sandbox \"d201b124024d9b106e6d1337f1f25cce462d956a3ce63b87e811758b99b93e84\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"afe982a71981fa9da0135dc1fd19c029d4fb3c2d39d13fe5b4db7cf97348d1bb\"" Jul 15 05:17:05.135811 containerd[1996]: time="2025-07-15T05:17:05.135773336Z" level=info msg="StartContainer for \"afe982a71981fa9da0135dc1fd19c029d4fb3c2d39d13fe5b4db7cf97348d1bb\"" Jul 15 05:17:05.173790 containerd[1996]: time="2025-07-15T05:17:05.173725056Z" level=info msg="connecting to shim afe982a71981fa9da0135dc1fd19c029d4fb3c2d39d13fe5b4db7cf97348d1bb" address="unix:///run/containerd/s/fc7a27616fac429ec0fec9a0edc0507034363482d7bce3cd3ad41f03f08a4003" protocol=ttrpc version=3 Jul 15 
05:17:05.223188 systemd[1]: Started cri-containerd-afe982a71981fa9da0135dc1fd19c029d4fb3c2d39d13fe5b4db7cf97348d1bb.scope - libcontainer container afe982a71981fa9da0135dc1fd19c029d4fb3c2d39d13fe5b4db7cf97348d1bb. Jul 15 05:17:05.301238 kubelet[3322]: I0715 05:17:05.301032 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d8549b7d9-2jwmz" podStartSLOduration=30.897004524 podStartE2EDuration="43.299887558s" podCreationTimestamp="2025-07-15 05:16:22 +0000 UTC" firstStartedPulling="2025-07-15 05:16:52.263611381 +0000 UTC m=+45.855946335" lastFinishedPulling="2025-07-15 05:17:04.66649441 +0000 UTC m=+58.258829369" observedRunningTime="2025-07-15 05:17:05.288052336 +0000 UTC m=+58.880387318" watchObservedRunningTime="2025-07-15 05:17:05.299887558 +0000 UTC m=+58.892222522" Jul 15 05:17:05.398227 containerd[1996]: time="2025-07-15T05:17:05.398164519Z" level=info msg="StartContainer for \"afe982a71981fa9da0135dc1fd19c029d4fb3c2d39d13fe5b4db7cf97348d1bb\" returns successfully" Jul 15 05:17:06.301262 kubelet[3322]: I0715 05:17:06.301165 3322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:17:07.284461 systemd[1]: Started sshd@9-172.31.18.224:22-139.178.89.65:35150.service - OpenSSH per-connection server daemon (139.178.89.65:35150). Jul 15 05:17:07.775275 kubelet[3322]: E0715 05:17:07.775116 3322 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.041s" Jul 15 05:17:07.791001 kubelet[3322]: I0715 05:17:07.789886 3322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:17:07.818072 sshd[5716]: Accepted publickey for core from 139.178.89.65 port 35150 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:07.826397 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:07.850781 systemd-logind[1980]: New session 10 of user core. Jul 15 05:17:07.856286 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 05:17:09.149794 sshd[5719]: Connection closed by 139.178.89.65 port 35150 Jul 15 05:17:09.150242 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:09.175960 systemd[1]: sshd@9-172.31.18.224:22-139.178.89.65:35150.service: Deactivated successfully. Jul 15 05:17:09.179865 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 05:17:09.183108 systemd-logind[1980]: Session 10 logged out. Waiting for processes to exit. Jul 15 05:17:09.188587 systemd-logind[1980]: Removed session 10. 
Jul 15 05:17:10.823214 containerd[1996]: time="2025-07-15T05:17:10.820080227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:10.835505 containerd[1996]: time="2025-07-15T05:17:10.835363649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 05:17:10.840461 containerd[1996]: time="2025-07-15T05:17:10.840412988Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:10.844808 containerd[1996]: time="2025-07-15T05:17:10.844757873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:10.847772 containerd[1996]: time="2025-07-15T05:17:10.847611950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.753463651s" Jul 15 05:17:10.847772 containerd[1996]: time="2025-07-15T05:17:10.847680256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 05:17:10.878726 containerd[1996]: time="2025-07-15T05:17:10.878381525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 05:17:10.964341 containerd[1996]: time="2025-07-15T05:17:10.964293079Z" level=info msg="CreateContainer within sandbox \"ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 05:17:10.980766 containerd[1996]: time="2025-07-15T05:17:10.980716588Z" level=info msg="Container a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:11.015852 containerd[1996]: time="2025-07-15T05:17:11.015803418Z" level=info msg="CreateContainer within sandbox \"ec8764d39c84f1fb4fe8057b38971360708f939554d70e854747b4bb92ad4f41\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd\"" Jul 15 05:17:11.053706 containerd[1996]: time="2025-07-15T05:17:11.053662516Z" level=info msg="StartContainer for \"a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd\"" Jul 15 05:17:11.087361 containerd[1996]: time="2025-07-15T05:17:11.087236330Z" level=info msg="connecting to shim a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd" address="unix:///run/containerd/s/36d1b29e1b789b308add4d520bd69aa4485a9a7dcea8193d0348c5e2113d8bb1" protocol=ttrpc version=3 Jul 15 05:17:11.291863 systemd[1]: Started cri-containerd-a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd.scope - libcontainer container a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd. 
Jul 15 05:17:11.836925 containerd[1996]: time="2025-07-15T05:17:11.835382703Z" level=info msg="StartContainer for \"a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd\" returns successfully" Jul 15 05:17:11.976178 kubelet[3322]: I0715 05:17:11.974037 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d8549b7d9-d5z85" podStartSLOduration=37.625280919 podStartE2EDuration="49.944780582s" podCreationTimestamp="2025-07-15 05:16:22 +0000 UTC" firstStartedPulling="2025-07-15 05:16:52.772982391 +0000 UTC m=+46.365317350" lastFinishedPulling="2025-07-15 05:17:05.092482074 +0000 UTC m=+58.684817013" observedRunningTime="2025-07-15 05:17:06.365727572 +0000 UTC m=+59.958062553" watchObservedRunningTime="2025-07-15 05:17:11.944780582 +0000 UTC m=+65.537115544" Jul 15 05:17:11.981583 kubelet[3322]: I0715 05:17:11.980799 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-665c85449c-swjcb" podStartSLOduration=26.913293577 podStartE2EDuration="44.980774899s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:52.783072743 +0000 UTC m=+46.375407686" lastFinishedPulling="2025-07-15 05:17:10.850554056 +0000 UTC m=+64.442889008" observedRunningTime="2025-07-15 05:17:11.975933063 +0000 UTC m=+65.568268016" watchObservedRunningTime="2025-07-15 05:17:11.980774899 +0000 UTC m=+65.573110028" Jul 15 05:17:12.492369 containerd[1996]: time="2025-07-15T05:17:12.492165062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd\" id:\"fe12118b6f236b8929211256691087179976109ef72f9297af25b2c728373828\" pid:5795 exit_status:1 exited_at:{seconds:1752556632 nanos:491272819}" Jul 15 05:17:12.751613 containerd[1996]: time="2025-07-15T05:17:12.751467730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:12.754822 containerd[1996]: time="2025-07-15T05:17:12.754783638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 05:17:12.756946 containerd[1996]: time="2025-07-15T05:17:12.756569407Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:12.763498 containerd[1996]: time="2025-07-15T05:17:12.763374004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:12.766128 containerd[1996]: time="2025-07-15T05:17:12.766081333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.88765631s" Jul 15 05:17:12.766319 containerd[1996]: time="2025-07-15T05:17:12.766297304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 05:17:12.784334 containerd[1996]: time="2025-07-15T05:17:12.783716204Z" level=info 
msg="CreateContainer within sandbox \"fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 05:17:12.882942 containerd[1996]: time="2025-07-15T05:17:12.881217641Z" level=info msg="Container 716811586c8832b9d3470806d949c5db8bc5b00bf208880b03bcdf3d19ec5a60: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:12.934845 containerd[1996]: time="2025-07-15T05:17:12.934707921Z" level=info msg="CreateContainer within sandbox \"fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"716811586c8832b9d3470806d949c5db8bc5b00bf208880b03bcdf3d19ec5a60\"" Jul 15 05:17:12.938848 containerd[1996]: time="2025-07-15T05:17:12.938809144Z" level=info msg="StartContainer for \"716811586c8832b9d3470806d949c5db8bc5b00bf208880b03bcdf3d19ec5a60\"" Jul 15 05:17:12.941070 containerd[1996]: time="2025-07-15T05:17:12.940988395Z" level=info msg="connecting to shim 716811586c8832b9d3470806d949c5db8bc5b00bf208880b03bcdf3d19ec5a60" address="unix:///run/containerd/s/f63638227a75931b04d6faaef813ce437f5a126e9ec3d81dd990a5f17812cc3c" protocol=ttrpc version=3 Jul 15 05:17:12.984945 systemd[1]: Started cri-containerd-716811586c8832b9d3470806d949c5db8bc5b00bf208880b03bcdf3d19ec5a60.scope - libcontainer container 716811586c8832b9d3470806d949c5db8bc5b00bf208880b03bcdf3d19ec5a60. Jul 15 05:17:13.165785 containerd[1996]: time="2025-07-15T05:17:13.165724305Z" level=info msg="StartContainer for \"716811586c8832b9d3470806d949c5db8bc5b00bf208880b03bcdf3d19ec5a60\" returns successfully" Jul 15 05:17:13.173761 containerd[1996]: time="2025-07-15T05:17:13.173725038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 05:17:13.217160 containerd[1996]: time="2025-07-15T05:17:13.217080543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd\" id:\"afb042af61dcca88cb0e17a843a0031ff82c819525695a79e0afccc0f7d00f0f\" pid:5843 exited_at:{seconds:1752556633 nanos:216736589}" Jul 15 05:17:14.190340 systemd[1]: Started sshd@10-172.31.18.224:22-139.178.89.65:41874.service - OpenSSH per-connection server daemon (139.178.89.65:41874). Jul 15 05:17:14.492953 sshd[5871]: Accepted publickey for core from 139.178.89.65 port 41874 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:14.498525 sshd-session[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:14.510295 systemd-logind[1980]: New session 11 of user core. Jul 15 05:17:14.517350 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 05:17:15.681437 sshd[5874]: Connection closed by 139.178.89.65 port 41874 Jul 15 05:17:15.680483 sshd-session[5871]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:15.690100 systemd[1]: sshd@10-172.31.18.224:22-139.178.89.65:41874.service: Deactivated successfully. Jul 15 05:17:15.698415 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 05:17:15.709626 systemd-logind[1980]: Session 11 logged out. Waiting for processes to exit. Jul 15 05:17:15.726400 systemd-logind[1980]: Removed session 11. 
Jul 15 05:17:15.944531 containerd[1996]: time="2025-07-15T05:17:15.944141899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:15.946613 containerd[1996]: time="2025-07-15T05:17:15.946536100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 05:17:15.948014 containerd[1996]: time="2025-07-15T05:17:15.947790573Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:15.953486 containerd[1996]: time="2025-07-15T05:17:15.953446019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:15.955150 containerd[1996]: time="2025-07-15T05:17:15.955098378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.781127922s" Jul 15 05:17:15.955785 containerd[1996]: time="2025-07-15T05:17:15.955670159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 05:17:15.964614 containerd[1996]: time="2025-07-15T05:17:15.964234118Z" level=info msg="CreateContainer within sandbox \"fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 05:17:15.982319 containerd[1996]: time="2025-07-15T05:17:15.982073541Z" level=info msg="Container 3ac1a731a3a8901cdc0985091ff219b5eb1479075043c43e6f9e8a8c4b140238: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:16.003388 containerd[1996]: time="2025-07-15T05:17:16.003247748Z" level=info msg="CreateContainer within sandbox \"fb27a28eda7669277ef00dfd7ef3a360943c6bd52d1999e2359fc4e9108da216\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3ac1a731a3a8901cdc0985091ff219b5eb1479075043c43e6f9e8a8c4b140238\"" Jul 15 05:17:16.005244 containerd[1996]: time="2025-07-15T05:17:16.004867489Z" level=info msg="StartContainer for \"3ac1a731a3a8901cdc0985091ff219b5eb1479075043c43e6f9e8a8c4b140238\"" Jul 15 05:17:16.011167 containerd[1996]: time="2025-07-15T05:17:16.011003891Z" level=info msg="connecting to shim 3ac1a731a3a8901cdc0985091ff219b5eb1479075043c43e6f9e8a8c4b140238" address="unix:///run/containerd/s/f63638227a75931b04d6faaef813ce437f5a126e9ec3d81dd990a5f17812cc3c" protocol=ttrpc version=3 Jul 15 05:17:16.151518 systemd[1]: Started cri-containerd-3ac1a731a3a8901cdc0985091ff219b5eb1479075043c43e6f9e8a8c4b140238.scope - libcontainer container 3ac1a731a3a8901cdc0985091ff219b5eb1479075043c43e6f9e8a8c4b140238. 
Jul 15 05:17:16.297979 containerd[1996]: time="2025-07-15T05:17:16.295997916Z" level=info msg="StartContainer for \"3ac1a731a3a8901cdc0985091ff219b5eb1479075043c43e6f9e8a8c4b140238\" returns successfully" Jul 15 05:17:17.161393 kubelet[3322]: I0715 05:17:17.153311 3322 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 05:17:17.170643 kubelet[3322]: I0715 05:17:17.170599 3322 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 05:17:17.803932 containerd[1996]: time="2025-07-15T05:17:17.803610467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\" id:\"f98675d9846f6fdf4715fdb9fdaea3a883ed49620938c9c1259f608e27b6d7b6\" pid:5942 exited_at:{seconds:1752556637 nanos:801240141}" Jul 15 05:17:20.715062 systemd[1]: Started sshd@11-172.31.18.224:22-139.178.89.65:38412.service - OpenSSH per-connection server daemon (139.178.89.65:38412). Jul 15 05:17:20.985246 sshd[5957]: Accepted publickey for core from 139.178.89.65 port 38412 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:20.990133 sshd-session[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:20.999004 systemd-logind[1980]: New session 12 of user core. Jul 15 05:17:21.003350 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 05:17:21.898136 sshd[5960]: Connection closed by 139.178.89.65 port 38412 Jul 15 05:17:21.903358 sshd-session[5957]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:21.914870 systemd[1]: sshd@11-172.31.18.224:22-139.178.89.65:38412.service: Deactivated successfully. Jul 15 05:17:21.918154 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 05:17:21.921444 systemd-logind[1980]: Session 12 logged out. Waiting for processes to exit. Jul 15 05:17:21.943400 systemd[1]: Started sshd@12-172.31.18.224:22-139.178.89.65:38422.service - OpenSSH per-connection server daemon (139.178.89.65:38422). Jul 15 05:17:21.945415 systemd-logind[1980]: Removed session 12. Jul 15 05:17:22.159049 sshd[5996]: Accepted publickey for core from 139.178.89.65 port 38422 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:22.169008 sshd-session[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:22.184156 systemd-logind[1980]: New session 13 of user core. Jul 15 05:17:22.189154 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 05:17:22.313130 containerd[1996]: time="2025-07-15T05:17:22.313075856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518\" id:\"363875ea7fec9c167170c9158a24306349e1a4d69b429549fd7191771098e0e7\" pid:5979 exited_at:{seconds:1752556642 nanos:312739033}" Jul 15 05:17:22.711666 sshd[6000]: Connection closed by 139.178.89.65 port 38422 Jul 15 05:17:22.713629 sshd-session[5996]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:22.724605 systemd[1]: sshd@12-172.31.18.224:22-139.178.89.65:38422.service: Deactivated successfully. Jul 15 05:17:22.729833 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:17:22.739966 systemd-logind[1980]: Session 13 logged out. Waiting for processes to exit. 
Jul 15 05:17:22.754179 systemd[1]: Started sshd@13-172.31.18.224:22-139.178.89.65:38436.service - OpenSSH per-connection server daemon (139.178.89.65:38436). Jul 15 05:17:22.755931 systemd-logind[1980]: Removed session 13. Jul 15 05:17:22.977658 sshd[6010]: Accepted publickey for core from 139.178.89.65 port 38436 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:22.981850 sshd-session[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:22.993441 systemd-logind[1980]: New session 14 of user core. Jul 15 05:17:23.001125 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 05:17:23.312526 sshd[6013]: Connection closed by 139.178.89.65 port 38436 Jul 15 05:17:23.313125 sshd-session[6010]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:23.321190 systemd[1]: sshd@13-172.31.18.224:22-139.178.89.65:38436.service: Deactivated successfully. Jul 15 05:17:23.324715 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 05:17:23.328565 systemd-logind[1980]: Session 14 logged out. Waiting for processes to exit. Jul 15 05:17:23.330612 systemd-logind[1980]: Removed session 14. Jul 15 05:17:24.847851 containerd[1996]: time="2025-07-15T05:17:24.847808403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd\" id:\"d1c630df41222c4a1c144b900cbe5f4e175f068f05c061aff95198f3f18bd9f0\" pid:6040 exited_at:{seconds:1752556644 nanos:847518686}" Jul 15 05:17:28.347455 systemd[1]: Started sshd@14-172.31.18.224:22-139.178.89.65:38452.service - OpenSSH per-connection server daemon (139.178.89.65:38452). Jul 15 05:17:28.598221 sshd[6051]: Accepted publickey for core from 139.178.89.65 port 38452 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:28.602284 sshd-session[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:28.619270 systemd-logind[1980]: New session 15 of user core. Jul 15 05:17:28.629161 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 05:17:29.222081 sshd[6054]: Connection closed by 139.178.89.65 port 38452 Jul 15 05:17:29.224782 sshd-session[6051]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:29.233666 systemd[1]: sshd@14-172.31.18.224:22-139.178.89.65:38452.service: Deactivated successfully. Jul 15 05:17:29.236756 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 05:17:29.241386 systemd-logind[1980]: Session 15 logged out. Waiting for processes to exit. Jul 15 05:17:29.244603 systemd-logind[1980]: Removed session 15. 
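In the sshd entries throughout this stretch, "Accepted publickey for core ... RSA SHA256:GkB2NQb8..." identifies the client key by its SHA-256 fingerprint, the unpadded base64 digest of the wire-format public key. As a hedged sketch of how such a fingerprint is derived, using golang.org/x/crypto/ssh and a freshly generated throwaway key rather than the key from this log:

```go
// Illustration of SSH SHA-256 key fingerprints like those in the sshd entries above.
// The key is generated on the spot; it is NOT the key referenced in this log.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		panic(err)
	}

	// Prints "SHA256:<base64>", the same form sshd logs after "Accepted publickey".
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```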
Jul 15 05:17:29.484085 containerd[1996]: time="2025-07-15T05:17:29.483761781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\" id:\"094171417b65731ce46af0d0e605787d5e2135752dc4a92b90528b4e3b92d780\" pid:6080 exited_at:{seconds:1752556649 nanos:483090480}"
Jul 15 05:17:29.727000 kubelet[3322]: I0715 05:17:29.705605 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-plmkb" podStartSLOduration=39.613739234 podStartE2EDuration="1m2.67420104s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:52.896376758 +0000 UTC m=+46.488711709" lastFinishedPulling="2025-07-15 05:17:15.956838574 +0000 UTC m=+69.549173515" observedRunningTime="2025-07-15 05:17:17.017343196 +0000 UTC m=+70.609678159" watchObservedRunningTime="2025-07-15 05:17:29.67420104 +0000 UTC m=+83.266536002"
Jul 15 05:17:34.255842 systemd[1]: Started sshd@15-172.31.18.224:22-139.178.89.65:52882.service - OpenSSH per-connection server daemon (139.178.89.65:52882).
Jul 15 05:17:34.531622 sshd[6099]: Accepted publickey for core from 139.178.89.65 port 52882 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:17:34.539277 sshd-session[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:17:34.547703 systemd-logind[1980]: New session 16 of user core.
Jul 15 05:17:34.555186 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 05:17:35.276033 sshd[6102]: Connection closed by 139.178.89.65 port 52882
Jul 15 05:17:35.277210 sshd-session[6099]: pam_unix(sshd:session): session closed for user core
Jul 15 05:17:35.284414 systemd-logind[1980]: Session 16 logged out. Waiting for processes to exit.
Jul 15 05:17:35.285838 systemd[1]: sshd@15-172.31.18.224:22-139.178.89.65:52882.service: Deactivated successfully.
Jul 15 05:17:35.291348 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 05:17:35.298071 systemd-logind[1980]: Removed session 16.
Jul 15 05:17:40.313955 systemd[1]: Started sshd@16-172.31.18.224:22-139.178.89.65:43478.service - OpenSSH per-connection server daemon (139.178.89.65:43478).
Jul 15 05:17:40.491945 sshd[6114]: Accepted publickey for core from 139.178.89.65 port 43478 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:17:40.495661 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:17:40.507526 systemd-logind[1980]: New session 17 of user core.
Jul 15 05:17:40.514440 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 05:17:40.801953 sshd[6117]: Connection closed by 139.178.89.65 port 43478
Jul 15 05:17:40.801726 sshd-session[6114]: pam_unix(sshd:session): session closed for user core
Jul 15 05:17:40.807315 systemd[1]: sshd@16-172.31.18.224:22-139.178.89.65:43478.service: Deactivated successfully.
Jul 15 05:17:40.810682 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 05:17:40.812971 systemd-logind[1980]: Session 17 logged out. Waiting for processes to exit.
Jul 15 05:17:40.816040 systemd-logind[1980]: Removed session 17.
Jul 15 05:17:43.240333 containerd[1996]: time="2025-07-15T05:17:43.240275416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd\" id:\"060cba134aa2bfec3e031ecdc577e6b4e08d3b41dbd2d836a7808bfa9f4ad353\" pid:6142 exited_at:{seconds:1752556663 nanos:199864459}"
Jul 15 05:17:45.844815 systemd[1]: Started sshd@17-172.31.18.224:22-139.178.89.65:43486.service - OpenSSH per-connection server daemon (139.178.89.65:43486).
Jul 15 05:17:46.210428 sshd[6158]: Accepted publickey for core from 139.178.89.65 port 43486 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:17:46.214994 sshd-session[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:17:46.221672 systemd-logind[1980]: New session 18 of user core.
Jul 15 05:17:46.229282 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 05:17:47.107665 sshd[6161]: Connection closed by 139.178.89.65 port 43486
Jul 15 05:17:47.110151 sshd-session[6158]: pam_unix(sshd:session): session closed for user core
Jul 15 05:17:47.117125 systemd[1]: sshd@17-172.31.18.224:22-139.178.89.65:43486.service: Deactivated successfully.
Jul 15 05:17:47.120569 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 05:17:47.122861 systemd-logind[1980]: Session 18 logged out. Waiting for processes to exit.
Jul 15 05:17:47.128462 systemd-logind[1980]: Removed session 18.
Jul 15 05:17:47.148193 systemd[1]: Started sshd@18-172.31.18.224:22-139.178.89.65:43502.service - OpenSSH per-connection server daemon (139.178.89.65:43502).
Jul 15 05:17:47.342922 sshd[6172]: Accepted publickey for core from 139.178.89.65 port 43502 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:17:47.345221 sshd-session[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:17:47.356552 systemd-logind[1980]: New session 19 of user core.
Jul 15 05:17:47.361151 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 05:17:48.086056 sshd[6175]: Connection closed by 139.178.89.65 port 43502
Jul 15 05:17:48.086942 sshd-session[6172]: pam_unix(sshd:session): session closed for user core
Jul 15 05:17:48.093922 systemd[1]: sshd@18-172.31.18.224:22-139.178.89.65:43502.service: Deactivated successfully.
Jul 15 05:17:48.094941 systemd-logind[1980]: Session 19 logged out. Waiting for processes to exit.
Jul 15 05:17:48.099278 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 05:17:48.104075 systemd-logind[1980]: Removed session 19.
Jul 15 05:17:48.120810 systemd[1]: Started sshd@19-172.31.18.224:22-139.178.89.65:43506.service - OpenSSH per-connection server daemon (139.178.89.65:43506).
Jul 15 05:17:48.326728 sshd[6185]: Accepted publickey for core from 139.178.89.65 port 43506 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:17:48.328424 sshd-session[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:17:48.334147 systemd-logind[1980]: New session 20 of user core.
Jul 15 05:17:48.341137 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 05:17:49.404262 sshd[6188]: Connection closed by 139.178.89.65 port 43506
Jul 15 05:17:49.405462 sshd-session[6185]: pam_unix(sshd:session): session closed for user core
Jul 15 05:17:49.416652 systemd-logind[1980]: Session 20 logged out. Waiting for processes to exit.
Jul 15 05:17:49.417484 systemd[1]: sshd@19-172.31.18.224:22-139.178.89.65:43506.service: Deactivated successfully.
Jul 15 05:17:49.422584 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 05:17:49.448252 systemd-logind[1980]: Removed session 20.
Jul 15 05:17:49.450782 systemd[1]: Started sshd@20-172.31.18.224:22-139.178.89.65:45060.service - OpenSSH per-connection server daemon (139.178.89.65:45060).
Jul 15 05:17:49.726002 sshd[6206]: Accepted publickey for core from 139.178.89.65 port 45060 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:17:49.727793 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:17:49.739415 systemd-logind[1980]: New session 21 of user core.
Jul 15 05:17:49.745221 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 05:17:51.087893 sshd[6212]: Connection closed by 139.178.89.65 port 45060
Jul 15 05:17:51.090273 sshd-session[6206]: pam_unix(sshd:session): session closed for user core
Jul 15 05:17:51.098254 systemd-logind[1980]: Session 21 logged out. Waiting for processes to exit.
Jul 15 05:17:51.099723 systemd[1]: sshd@20-172.31.18.224:22-139.178.89.65:45060.service: Deactivated successfully.
Jul 15 05:17:51.105729 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 05:17:51.110980 systemd-logind[1980]: Removed session 21.
Jul 15 05:17:51.128665 systemd[1]: Started sshd@21-172.31.18.224:22-139.178.89.65:45074.service - OpenSSH per-connection server daemon (139.178.89.65:45074).
Jul 15 05:17:51.370470 sshd[6223]: Accepted publickey for core from 139.178.89.65 port 45074 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:17:51.374477 sshd-session[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:17:51.383517 systemd-logind[1980]: New session 22 of user core.
Jul 15 05:17:51.390333 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 05:17:51.876025 sshd[6244]: Connection closed by 139.178.89.65 port 45074
Jul 15 05:17:51.883769 sshd-session[6223]: pam_unix(sshd:session): session closed for user core
Jul 15 05:17:51.898551 systemd[1]: sshd@21-172.31.18.224:22-139.178.89.65:45074.service: Deactivated successfully.
Jul 15 05:17:51.899136 systemd-logind[1980]: Session 22 logged out. Waiting for processes to exit.
Jul 15 05:17:51.904750 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 05:17:51.910705 systemd-logind[1980]: Removed session 22.
Jul 15 05:17:52.214872 containerd[1996]: time="2025-07-15T05:17:52.214038578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518\" id:\"222700f372f9e0e33fc0ac3769ead978a1ac736917c9ccd6a0fa576cefd102db\" pid:6238 exited_at:{seconds:1752556672 nanos:212964714}"
Jul 15 05:17:56.913478 systemd[1]: Started sshd@22-172.31.18.224:22-139.178.89.65:45086.service - OpenSSH per-connection server daemon (139.178.89.65:45086).
Jul 15 05:17:57.198405 sshd[6266]: Accepted publickey for core from 139.178.89.65 port 45086 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:17:57.203766 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:17:57.214524 systemd-logind[1980]: New session 23 of user core.
Jul 15 05:17:57.220347 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 05:17:57.839490 sshd[6269]: Connection closed by 139.178.89.65 port 45086
Jul 15 05:17:57.841595 sshd-session[6266]: pam_unix(sshd:session): session closed for user core
Jul 15 05:17:57.862800 systemd[1]: sshd@22-172.31.18.224:22-139.178.89.65:45086.service: Deactivated successfully.
Jul 15 05:17:57.863132 systemd-logind[1980]: Session 23 logged out. Waiting for processes to exit.
Jul 15 05:17:57.868423 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 05:17:57.873549 systemd-logind[1980]: Removed session 23.
Jul 15 05:17:59.543205 containerd[1996]: time="2025-07-15T05:17:59.543115071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\" id:\"e422b588654ace193399046129db099d93805cf42d9f4a62e8b006250befecf4\" pid:6292 exited_at:{seconds:1752556679 nanos:542385139}"
Jul 15 05:18:02.881571 systemd[1]: Started sshd@23-172.31.18.224:22-139.178.89.65:47330.service - OpenSSH per-connection server daemon (139.178.89.65:47330).
Jul 15 05:18:03.141773 sshd[6303]: Accepted publickey for core from 139.178.89.65 port 47330 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:18:03.144032 sshd-session[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:18:03.153821 systemd-logind[1980]: New session 24 of user core.
Jul 15 05:18:03.163235 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 05:18:04.003942 sshd[6306]: Connection closed by 139.178.89.65 port 47330
Jul 15 05:18:04.007037 sshd-session[6303]: pam_unix(sshd:session): session closed for user core
Jul 15 05:18:04.019409 systemd[1]: sshd@23-172.31.18.224:22-139.178.89.65:47330.service: Deactivated successfully.
Jul 15 05:18:04.024459 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 05:18:04.026919 systemd-logind[1980]: Session 24 logged out. Waiting for processes to exit.
Jul 15 05:18:04.030362 systemd-logind[1980]: Removed session 24.
Jul 15 05:18:09.040374 systemd[1]: Started sshd@24-172.31.18.224:22-139.178.89.65:39784.service - OpenSSH per-connection server daemon (139.178.89.65:39784).
Jul 15 05:18:09.283934 sshd[6320]: Accepted publickey for core from 139.178.89.65 port 39784 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:18:09.285344 sshd-session[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:18:09.294324 systemd-logind[1980]: New session 25 of user core.
Jul 15 05:18:09.298099 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 05:18:09.695387 sshd[6329]: Connection closed by 139.178.89.65 port 39784
Jul 15 05:18:09.696198 sshd-session[6320]: pam_unix(sshd:session): session closed for user core
Jul 15 05:18:09.707759 systemd[1]: sshd@24-172.31.18.224:22-139.178.89.65:39784.service: Deactivated successfully.
Jul 15 05:18:09.710896 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 05:18:09.712486 systemd-logind[1980]: Session 25 logged out. Waiting for processes to exit.
Jul 15 05:18:09.715843 systemd-logind[1980]: Removed session 25.
Jul 15 05:18:12.984303 containerd[1996]: time="2025-07-15T05:18:12.984256317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd\" id:\"ca3b3691367b2ba69bf4543c0c017594fb773d9fcac6e0c3f89480837a9b8ecf\" pid:6356 exited_at:{seconds:1752556692 nanos:983669924}"
Jul 15 05:18:14.741731 systemd[1]: Started sshd@25-172.31.18.224:22-139.178.89.65:39794.service - OpenSSH per-connection server daemon (139.178.89.65:39794).
Jul 15 05:18:14.936661 sshd[6367]: Accepted publickey for core from 139.178.89.65 port 39794 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:18:14.939641 sshd-session[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:18:14.951732 systemd-logind[1980]: New session 26 of user core.
Jul 15 05:18:14.957412 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 05:18:15.363158 sshd[6370]: Connection closed by 139.178.89.65 port 39794
Jul 15 05:18:15.368267 sshd-session[6367]: pam_unix(sshd:session): session closed for user core
Jul 15 05:18:15.374564 systemd[1]: sshd@25-172.31.18.224:22-139.178.89.65:39794.service: Deactivated successfully.
Jul 15 05:18:15.378037 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 05:18:15.380455 systemd-logind[1980]: Session 26 logged out. Waiting for processes to exit.
Jul 15 05:18:15.381944 systemd-logind[1980]: Removed session 26.
Jul 15 05:18:17.205452 containerd[1996]: time="2025-07-15T05:18:17.151205531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\" id:\"7f0be9e1d4be000fc85fd4e62ddeabbcda54c636d233fdb63643f883e8c48d34\" pid:6393 exited_at:{seconds:1752556697 nanos:150494836}"
Jul 15 05:18:20.401334 systemd[1]: Started sshd@26-172.31.18.224:22-139.178.89.65:51446.service - OpenSSH per-connection server daemon (139.178.89.65:51446).
Jul 15 05:18:20.636251 sshd[6404]: Accepted publickey for core from 139.178.89.65 port 51446 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:18:20.639737 sshd-session[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:18:20.653116 systemd-logind[1980]: New session 27 of user core.
Jul 15 05:18:20.657316 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 15 05:18:21.019651 sshd[6407]: Connection closed by 139.178.89.65 port 51446
Jul 15 05:18:21.020293 sshd-session[6404]: pam_unix(sshd:session): session closed for user core
Jul 15 05:18:21.029224 systemd-logind[1980]: Session 27 logged out. Waiting for processes to exit.
Jul 15 05:18:21.029468 systemd[1]: sshd@26-172.31.18.224:22-139.178.89.65:51446.service: Deactivated successfully.
Jul 15 05:18:21.032218 systemd[1]: session-27.scope: Deactivated successfully.
Jul 15 05:18:21.034166 systemd-logind[1980]: Removed session 27.
Jul 15 05:18:21.857059 containerd[1996]: time="2025-07-15T05:18:21.857014465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1dac7f605fa00166c119ba0415a1e7136b369767826afc6fb0b14babfdda518\" id:\"a7f0aa3ad4972c7859c9ae3157aff54f99fc0047f83a4ae27395f3475d3f169e\" pid:6431 exited_at:{seconds:1752556701 nanos:856628819}"
Jul 15 05:18:24.787987 containerd[1996]: time="2025-07-15T05:18:24.787450653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a334789cfcb479693109a6f7efcd7470b4d1cf685f5e2cb886f328d63db0a0dd\" id:\"43b7ad4c93af61ad9d2031cdfa738829d27727658359f02611dbe0e06b3f1239\" pid:6456 exited_at:{seconds:1752556704 nanos:786171741}"
Jul 15 05:18:26.059697 systemd[1]: Started sshd@27-172.31.18.224:22-139.178.89.65:51452.service - OpenSSH per-connection server daemon (139.178.89.65:51452).
Jul 15 05:18:26.376928 sshd[6474]: Accepted publickey for core from 139.178.89.65 port 51452 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI
Jul 15 05:18:26.379753 sshd-session[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:18:26.387656 systemd-logind[1980]: New session 28 of user core.
Jul 15 05:18:26.395427 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 15 05:18:27.432089 sshd[6477]: Connection closed by 139.178.89.65 port 51452
Jul 15 05:18:27.437435 sshd-session[6474]: pam_unix(sshd:session): session closed for user core
Jul 15 05:18:27.448978 systemd[1]: sshd@27-172.31.18.224:22-139.178.89.65:51452.service: Deactivated successfully.
Jul 15 05:18:27.453812 systemd[1]: session-28.scope: Deactivated successfully.
Jul 15 05:18:27.456926 systemd-logind[1980]: Session 28 logged out. Waiting for processes to exit.
Jul 15 05:18:27.460406 systemd-logind[1980]: Removed session 28.
Jul 15 05:18:29.261625 containerd[1996]: time="2025-07-15T05:18:29.261562919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9011ce956e9c79256ccdb9007222128056479a916534b9da7694eaab0b9fb5fe\" id:\"7d678f351d5ff1b3698fe2c3b3f3d041631f7afd3ae6da0b68d07a4e8fe4f022\" pid:6514 exited_at:{seconds:1752556709 nanos:261311110}"