Jun 20 19:55:09.908624 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025 Jun 20 19:55:09.908665 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:55:09.908681 kernel: BIOS-provided physical RAM map: Jun 20 19:55:09.908693 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 20 19:55:09.908704 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jun 20 19:55:09.908716 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jun 20 19:55:09.908730 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jun 20 19:55:09.908779 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jun 20 19:55:09.908795 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jun 20 19:55:09.908807 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jun 20 19:55:09.908818 kernel: NX (Execute Disable) protection: active Jun 20 19:55:09.908831 kernel: APIC: Static calls initialized Jun 20 19:55:09.908843 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jun 20 19:55:09.908855 kernel: extended physical RAM map: Jun 20 19:55:09.908874 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 20 19:55:09.908887 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jun 20 19:55:09.908901 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jun 20 19:55:09.908914 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jun 20 19:55:09.908928 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jun 20 19:55:09.908941 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jun 20 19:55:09.908954 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jun 20 19:55:09.908968 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jun 20 19:55:09.908981 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jun 20 19:55:09.908994 kernel: efi: EFI v2.7 by EDK II Jun 20 19:55:09.909010 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Jun 20 19:55:09.909024 kernel: secureboot: Secure boot disabled Jun 20 19:55:09.909037 kernel: SMBIOS 2.7 present. 
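The command line logged above mixes plain flags with repeated keys (console= appears twice, once for ttyS0 and once for tty0). A minimal Python sketch of splitting such a line into key/value groups, reading the live value from /proc/cmdline; quoted values containing spaces are not handled.

    # Minimal sketch: split a kernel command line (like the one logged above)
    # into key/value pairs; repeated keys such as "console=" are kept in order.
    from collections import defaultdict

    def parse_cmdline(cmdline: str) -> dict:
        params = defaultdict(list)
        for token in cmdline.split():
            key, _, value = token.partition("=")
            params[key].append(value)  # bare flags get an empty value
        return dict(params)

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            parsed = parse_cmdline(f.read().strip())
        for key, values in parsed.items():
            print(key, "=", ", ".join(v or "<set>" for v in values))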
Jun 20 19:55:09.909049 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jun 20 19:55:09.909061 kernel: DMI: Memory slots populated: 1/1 Jun 20 19:55:09.909073 kernel: Hypervisor detected: KVM Jun 20 19:55:09.909085 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 20 19:55:09.909097 kernel: kvm-clock: using sched offset of 5202615112 cycles Jun 20 19:55:09.909111 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 19:55:09.909124 kernel: tsc: Detected 2500.004 MHz processor Jun 20 19:55:09.909137 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:55:09.909153 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:55:09.909166 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jun 20 19:55:09.909179 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 20 19:55:09.909192 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:55:09.909206 kernel: Using GB pages for direct mapping Jun 20 19:55:09.909224 kernel: ACPI: Early table checksum verification disabled Jun 20 19:55:09.909240 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jun 20 19:55:09.909254 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jun 20 19:55:09.909267 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 20 19:55:09.909280 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jun 20 19:55:09.909293 kernel: ACPI: FACS 0x00000000789D0000 000040 Jun 20 19:55:09.909307 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jun 20 19:55:09.909320 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 20 19:55:09.909334 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 20 19:55:09.909350 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jun 20 19:55:09.909364 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jun 20 19:55:09.909378 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 20 19:55:09.909391 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 20 19:55:09.909405 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jun 20 19:55:09.909419 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jun 20 19:55:09.909432 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jun 20 19:55:09.909446 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jun 20 19:55:09.909462 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jun 20 19:55:09.909476 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jun 20 19:55:09.909489 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jun 20 19:55:09.909503 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jun 20 19:55:09.909516 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jun 20 19:55:09.909530 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jun 20 19:55:09.909543 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jun 20 19:55:09.909557 kernel: ACPI: Reserving BGRT table memory 
at [mem 0x78951000-0x78951037] Jun 20 19:55:09.909570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jun 20 19:55:09.909586 kernel: NUMA: Initialized distance table, cnt=1 Jun 20 19:55:09.909600 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Jun 20 19:55:09.909613 kernel: Zone ranges: Jun 20 19:55:09.909626 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:55:09.909640 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jun 20 19:55:09.909653 kernel: Normal empty Jun 20 19:55:09.909666 kernel: Device empty Jun 20 19:55:09.909680 kernel: Movable zone start for each node Jun 20 19:55:09.909694 kernel: Early memory node ranges Jun 20 19:55:09.909707 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 20 19:55:09.909723 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jun 20 19:55:09.909737 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jun 20 19:55:09.909775 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jun 20 19:55:09.909787 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:55:09.912119 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 20 19:55:09.912137 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jun 20 19:55:09.912152 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jun 20 19:55:09.912164 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 20 19:55:09.912179 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 20 19:55:09.912199 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jun 20 19:55:09.912214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 20 19:55:09.912228 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 19:55:09.912241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 20 19:55:09.912253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 20 19:55:09.912267 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:55:09.912279 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 20 19:55:09.912292 kernel: TSC deadline timer available Jun 20 19:55:09.912307 kernel: CPU topo: Max. logical packages: 1 Jun 20 19:55:09.912321 kernel: CPU topo: Max. logical dies: 1 Jun 20 19:55:09.912339 kernel: CPU topo: Max. dies per package: 1 Jun 20 19:55:09.912352 kernel: CPU topo: Max. threads per core: 2 Jun 20 19:55:09.912365 kernel: CPU topo: Num. cores per package: 1 Jun 20 19:55:09.912377 kernel: CPU topo: Num. 
threads per package: 2 Jun 20 19:55:09.912390 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 20 19:55:09.912403 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 20 19:55:09.912416 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jun 20 19:55:09.912430 kernel: Booting paravirtualized kernel on KVM Jun 20 19:55:09.912445 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:55:09.912461 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 19:55:09.912476 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 20 19:55:09.912489 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 20 19:55:09.912501 kernel: pcpu-alloc: [0] 0 1 Jun 20 19:55:09.912513 kernel: kvm-guest: PV spinlocks enabled Jun 20 19:55:09.912527 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 20 19:55:09.912544 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:55:09.912560 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:55:09.912579 kernel: random: crng init done Jun 20 19:55:09.912592 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:55:09.912608 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 19:55:09.912623 kernel: Fallback order for Node 0: 0 Jun 20 19:55:09.912637 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Jun 20 19:55:09.912650 kernel: Policy zone: DMA32 Jun 20 19:55:09.912675 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:55:09.912693 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 19:55:09.912707 kernel: Kernel/User page tables isolation: enabled Jun 20 19:55:09.912729 kernel: ftrace: allocating 40093 entries in 157 pages Jun 20 19:55:09.912763 kernel: ftrace: allocated 157 pages with 5 groups Jun 20 19:55:09.912788 kernel: Dynamic Preempt: voluntary Jun 20 19:55:09.912802 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:55:09.912815 kernel: rcu: RCU event tracing is enabled. Jun 20 19:55:09.912829 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 19:55:09.912844 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:55:09.912859 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:55:09.912879 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:55:09.912892 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:55:09.912906 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 19:55:09.912920 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:55:09.912934 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:55:09.912950 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
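The "Total pages: 509451" figure follows directly from the "Early memory node ranges" entries a few lines up. A short Python check, assuming the 4 KiB base page size used on x86-64 (the range bounds are copied from the log):

    # The three "node 0" ranges logged under "Early memory node ranges",
    # as inclusive [start, end] physical addresses.
    ranges = [
        (0x0000000000001000, 0x000000000009ffff),
        (0x0000000000100000, 0x00000000786cdfff),
        (0x00000000789de000, 0x000000007c97bfff),
    ]
    PAGE = 4096  # x86-64 base page size

    pages = sum((end + 1 - start) // PAGE for start, end in ranges)
    print(pages)                 # 509451 -> matches "Total pages: 509451"
    print(pages * PAGE // 1024)  # 2037804 -> matches the "/2037804K" total later on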
Jun 20 19:55:09.912965 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 20 19:55:09.912980 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 19:55:09.912998 kernel: Console: colour dummy device 80x25 Jun 20 19:55:09.913013 kernel: printk: legacy console [tty0] enabled Jun 20 19:55:09.913028 kernel: printk: legacy console [ttyS0] enabled Jun 20 19:55:09.913043 kernel: ACPI: Core revision 20240827 Jun 20 19:55:09.913059 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jun 20 19:55:09.913075 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:55:09.913090 kernel: x2apic enabled Jun 20 19:55:09.913105 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:55:09.913120 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Jun 20 19:55:09.913136 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004) Jun 20 19:55:09.913149 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 20 19:55:09.913164 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jun 20 19:55:09.913180 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:55:09.913195 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 19:55:09.913210 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 19:55:09.913226 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 20 19:55:09.913241 kernel: RETBleed: Vulnerable Jun 20 19:55:09.913255 kernel: Speculative Store Bypass: Vulnerable Jun 20 19:55:09.913268 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 19:55:09.913280 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 19:55:09.913298 kernel: GDS: Unknown: Dependent on hypervisor status Jun 20 19:55:09.913312 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 19:55:09.913325 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:55:09.913340 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:55:09.913355 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:55:09.913369 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jun 20 19:55:09.913384 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jun 20 19:55:09.913399 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 20 19:55:09.913413 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 20 19:55:09.913428 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 20 19:55:09.913445 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jun 20 19:55:09.913460 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:55:09.913474 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jun 20 19:55:09.913489 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jun 20 19:55:09.913505 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jun 20 19:55:09.913520 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jun 20 19:55:09.913536 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jun 20 19:55:09.913551 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jun 20 19:55:09.913567 kernel: x86/fpu: Enabled xstate features 0x2ff, context 
size is 2568 bytes, using 'compacted' format. Jun 20 19:55:09.913584 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:55:09.913599 kernel: pid_max: default: 32768 minimum: 301 Jun 20 19:55:09.913615 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 19:55:09.913634 kernel: landlock: Up and running. Jun 20 19:55:09.913650 kernel: SELinux: Initializing. Jun 20 19:55:09.913666 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:55:09.913682 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:55:09.913700 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 20 19:55:09.913716 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 20 19:55:09.913732 kernel: signal: max sigframe size: 3632 Jun 20 19:55:09.915120 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:55:09.915141 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:55:09.915159 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 20 19:55:09.915181 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 19:55:09.915197 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:55:09.915214 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:55:09.915232 kernel: .... node #0, CPUs: #1 Jun 20 19:55:09.915249 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 20 19:55:09.915267 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 20 19:55:09.915283 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:55:09.915300 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) Jun 20 19:55:09.915317 kernel: Memory: 1908048K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 125192K reserved, 0K cma-reserved) Jun 20 19:55:09.915337 kernel: devtmpfs: initialized Jun 20 19:55:09.915353 kernel: x86/mm: Memory block size: 128MB Jun 20 19:55:09.915369 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jun 20 19:55:09.915386 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:55:09.915402 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 19:55:09.915418 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:55:09.915434 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:55:09.915451 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:55:09.915470 kernel: audit: type=2000 audit(1750449307.714:1): state=initialized audit_enabled=0 res=1 Jun 20 19:55:09.915486 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:55:09.915502 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:55:09.915517 kernel: cpuidle: using governor menu Jun 20 19:55:09.915533 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:55:09.915548 kernel: dca service started, version 1.12.1 Jun 20 19:55:09.915564 kernel: PCI: Using configuration type 1 for base access Jun 20 19:55:09.915580 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
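The "context size is 2568 bytes" figure is simply the end of the last enabled xstate component. A quick Python check using the offsets and sizes logged above; components 0 and 1 (x87/SSE) live in the legacy 576-byte area, which is why component 2 starts at offset 576.

    # xstate_offset[i] / xstate_sizes[i] pairs as logged above (compacted format).
    xstate = {2: (576, 256), 3: (832, 64), 4: (896, 64), 5: (960, 64),
              6: (1024, 512), 7: (1536, 1024), 9: (2560, 8)}

    context_size = max(off + size for off, size in xstate.values())
    print(context_size)  # 2568 -> matches "context size is 2568 bytes"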
Jun 20 19:55:09.915596 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:55:09.915613 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:55:09.915626 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:55:09.915640 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:55:09.915655 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:55:09.915669 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:55:09.915683 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:55:09.915699 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 20 19:55:09.915715 kernel: ACPI: Interpreter enabled Jun 20 19:55:09.915729 kernel: ACPI: PM: (supports S0 S5) Jun 20 19:55:09.917315 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:55:09.917336 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:55:09.917351 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 19:55:09.917365 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 20 19:55:09.917379 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 19:55:09.917619 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:55:09.917798 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 20 19:55:09.917941 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 20 19:55:09.917964 kernel: acpiphp: Slot [3] registered Jun 20 19:55:09.917979 kernel: acpiphp: Slot [4] registered Jun 20 19:55:09.917994 kernel: acpiphp: Slot [5] registered Jun 20 19:55:09.918008 kernel: acpiphp: Slot [6] registered Jun 20 19:55:09.918021 kernel: acpiphp: Slot [7] registered Jun 20 19:55:09.918036 kernel: acpiphp: Slot [8] registered Jun 20 19:55:09.918052 kernel: acpiphp: Slot [9] registered Jun 20 19:55:09.918068 kernel: acpiphp: Slot [10] registered Jun 20 19:55:09.918084 kernel: acpiphp: Slot [11] registered Jun 20 19:55:09.918104 kernel: acpiphp: Slot [12] registered Jun 20 19:55:09.918120 kernel: acpiphp: Slot [13] registered Jun 20 19:55:09.918136 kernel: acpiphp: Slot [14] registered Jun 20 19:55:09.918152 kernel: acpiphp: Slot [15] registered Jun 20 19:55:09.918168 kernel: acpiphp: Slot [16] registered Jun 20 19:55:09.918183 kernel: acpiphp: Slot [17] registered Jun 20 19:55:09.918196 kernel: acpiphp: Slot [18] registered Jun 20 19:55:09.918210 kernel: acpiphp: Slot [19] registered Jun 20 19:55:09.918224 kernel: acpiphp: Slot [20] registered Jun 20 19:55:09.918241 kernel: acpiphp: Slot [21] registered Jun 20 19:55:09.918255 kernel: acpiphp: Slot [22] registered Jun 20 19:55:09.918269 kernel: acpiphp: Slot [23] registered Jun 20 19:55:09.918283 kernel: acpiphp: Slot [24] registered Jun 20 19:55:09.918298 kernel: acpiphp: Slot [25] registered Jun 20 19:55:09.918312 kernel: acpiphp: Slot [26] registered Jun 20 19:55:09.918327 kernel: acpiphp: Slot [27] registered Jun 20 19:55:09.918341 kernel: acpiphp: Slot [28] registered Jun 20 19:55:09.918355 kernel: acpiphp: Slot [29] registered Jun 20 19:55:09.918369 kernel: acpiphp: Slot [30] registered Jun 20 19:55:09.918386 kernel: acpiphp: Slot [31] registered Jun 20 19:55:09.918400 kernel: PCI host bridge to bus 0000:00 Jun 20 19:55:09.918542 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 19:55:09.918660 kernel: pci_bus 0000:00: root bus 
resource [io 0x0d00-0xffff window] Jun 20 19:55:09.919481 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 20 19:55:09.919625 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 20 19:55:09.919768 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jun 20 19:55:09.919887 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 19:55:09.920037 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:55:09.920275 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jun 20 19:55:09.920412 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jun 20 19:55:09.920551 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 20 19:55:09.920686 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jun 20 19:55:09.921866 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jun 20 19:55:09.922022 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jun 20 19:55:09.922163 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jun 20 19:55:09.922303 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jun 20 19:55:09.922434 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jun 20 19:55:09.924485 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jun 20 19:55:09.924654 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jun 20 19:55:09.924862 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jun 20 19:55:09.925013 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 19:55:09.925165 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jun 20 19:55:09.925304 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jun 20 19:55:09.925450 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jun 20 19:55:09.925598 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jun 20 19:55:09.925624 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 20 19:55:09.925642 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 20 19:55:09.925658 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 19:55:09.925676 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 20 19:55:09.925692 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 20 19:55:09.925708 kernel: iommu: Default domain type: Translated Jun 20 19:55:09.925725 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:55:09.925766 kernel: efivars: Registered efivars operations Jun 20 19:55:09.925782 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:55:09.925801 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 19:55:09.925817 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jun 20 19:55:09.925832 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jun 20 19:55:09.925847 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jun 20 19:55:09.925996 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jun 20 19:55:09.926133 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jun 20 19:55:09.926268 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 19:55:09.926287 kernel: vgaarb: loaded Jun 20 19:55:09.926304 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jun 
20 19:55:09.926323 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jun 20 19:55:09.926338 kernel: clocksource: Switched to clocksource kvm-clock Jun 20 19:55:09.926354 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:55:09.926370 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:55:09.926385 kernel: pnp: PnP ACPI init Jun 20 19:55:09.926402 kernel: pnp: PnP ACPI: found 5 devices Jun 20 19:55:09.926417 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 19:55:09.926433 kernel: NET: Registered PF_INET protocol family Jun 20 19:55:09.926449 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 19:55:09.926467 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 20 19:55:09.926483 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:55:09.926499 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:55:09.926515 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 20 19:55:09.926530 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 20 19:55:09.926546 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:55:09.926562 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:55:09.926577 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:55:09.926596 kernel: NET: Registered PF_XDP protocol family Jun 20 19:55:09.926722 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 20 19:55:09.930543 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 20 19:55:09.930680 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 20 19:55:09.930825 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 20 19:55:09.930946 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jun 20 19:55:09.931092 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 20 19:55:09.931114 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:55:09.931131 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 20 19:55:09.931153 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Jun 20 19:55:09.931169 kernel: clocksource: Switched to clocksource tsc Jun 20 19:55:09.931185 kernel: Initialise system trusted keyrings Jun 20 19:55:09.931201 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 20 19:55:09.931217 kernel: Key type asymmetric registered Jun 20 19:55:09.931232 kernel: Asymmetric key parser 'x509' registered Jun 20 19:55:09.931247 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:55:09.931263 kernel: io scheduler mq-deadline registered Jun 20 19:55:09.931279 kernel: io scheduler kyber registered Jun 20 19:55:09.931298 kernel: io scheduler bfq registered Jun 20 19:55:09.931314 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:55:09.931330 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:55:09.931345 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:55:09.931361 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 20 19:55:09.931377 kernel: i8042: Warning: Keylock active Jun 20 19:55:09.931392 kernel: serio: i8042 KBD port at 0x60,0x64 
irq 1 Jun 20 19:55:09.931408 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 19:55:09.931558 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 20 19:55:09.931689 kernel: rtc_cmos 00:00: registered as rtc0 Jun 20 19:55:09.931847 kernel: rtc_cmos 00:00: setting system clock to 2025-06-20T19:55:09 UTC (1750449309) Jun 20 19:55:09.931966 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 20 19:55:09.931984 kernel: intel_pstate: CPU model not supported Jun 20 19:55:09.932024 kernel: efifb: probing for efifb Jun 20 19:55:09.932042 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jun 20 19:55:09.932067 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jun 20 19:55:09.932084 kernel: efifb: scrolling: redraw Jun 20 19:55:09.932098 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 20 19:55:09.932112 kernel: Console: switching to colour frame buffer device 100x37 Jun 20 19:55:09.932125 kernel: fb0: EFI VGA frame buffer device Jun 20 19:55:09.932140 kernel: pstore: Using crash dump compression: deflate Jun 20 19:55:09.932155 kernel: pstore: Registered efi_pstore as persistent store backend Jun 20 19:55:09.932170 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:55:09.932185 kernel: Segment Routing with IPv6 Jun 20 19:55:09.932200 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:55:09.932217 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:55:09.932233 kernel: Key type dns_resolver registered Jun 20 19:55:09.932247 kernel: IPI shorthand broadcast: enabled Jun 20 19:55:09.932262 kernel: sched_clock: Marking stable (2674002715, 152770408)->(2922882559, -96109436) Jun 20 19:55:09.932278 kernel: registered taskstats version 1 Jun 20 19:55:09.932293 kernel: Loading compiled-in X.509 certificates Jun 20 19:55:09.932308 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94' Jun 20 19:55:09.932324 kernel: Demotion targets for Node 0: null Jun 20 19:55:09.932339 kernel: Key type .fscrypt registered Jun 20 19:55:09.932354 kernel: Key type fscrypt-provisioning registered Jun 20 19:55:09.932375 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:55:09.932391 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:55:09.932405 kernel: ima: No architecture policies found Jun 20 19:55:09.932421 kernel: clk: Disabling unused clocks Jun 20 19:55:09.932436 kernel: Warning: unable to open an initial console. Jun 20 19:55:09.932452 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 20 19:55:09.932468 kernel: Write protecting the kernel read-only data: 24576k Jun 20 19:55:09.932483 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 20 19:55:09.932502 kernel: Run /init as init process Jun 20 19:55:09.932520 kernel: with arguments: Jun 20 19:55:09.932536 kernel: /init Jun 20 19:55:09.932551 kernel: with environment: Jun 20 19:55:09.932566 kernel: HOME=/ Jun 20 19:55:09.932581 kernel: TERM=linux Jun 20 19:55:09.932599 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:55:09.932617 systemd[1]: Successfully made /usr/ read-only. 
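The rtc_cmos line above prints the same instant in two forms; the epoch value in parentheses converts back as expected:

    from datetime import datetime, timezone

    # 1750449309 is the epoch value printed by rtc_cmos above.
    print(datetime.fromtimestamp(1750449309, tz=timezone.utc).isoformat())
    # -> 2025-06-20T19:55:09+00:00, matching "2025-06-20T19:55:09 UTC"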
Jun 20 19:55:09.932638 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:55:09.932655 systemd[1]: Detected virtualization amazon. Jun 20 19:55:09.932670 systemd[1]: Detected architecture x86-64. Jun 20 19:55:09.932686 systemd[1]: Running in initrd. Jun 20 19:55:09.932702 systemd[1]: No hostname configured, using default hostname. Jun 20 19:55:09.932721 systemd[1]: Hostname set to . Jun 20 19:55:09.932780 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:55:09.932799 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:55:09.932817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:55:09.932835 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:55:09.932855 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:55:09.932873 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:55:09.932889 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:55:09.932914 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:55:09.932934 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:55:09.932952 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:55:09.932970 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:55:09.932987 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:55:09.933004 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:55:09.933022 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:55:09.933042 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:55:09.933060 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:55:09.933077 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:55:09.933095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:55:09.933114 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:55:09.933132 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:55:09.933150 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:55:09.933168 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:55:09.933189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:55:09.933206 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:55:09.933224 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:55:09.933242 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:55:09.933259 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
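The .device unit names above are systemd path escapes of the underlying /dev paths: '/' becomes '-', and characters such as '-' are hex-escaped. A simplified Python approximation of `systemd-escape --path`, enough to reproduce the names in this log (the real tool also handles leading dots, empty paths, and trailing slashes):

    # Simplified sketch of systemd path escaping: strip the leading '/',
    # map '/' to '-', and hex-escape anything that is not alphanumeric,
    # ':', '_' or '.'.
    def escape_path(path: str) -> str:
        out = []
        for ch in path.lstrip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log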
Jun 20 19:55:09.933277 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 19:55:09.933296 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:55:09.933313 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:55:09.933331 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:55:09.933351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:55:09.933369 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:55:09.933424 systemd-journald[208]: Collecting audit messages is disabled. Jun 20 19:55:09.933467 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:55:09.933485 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:55:09.933504 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:55:09.933522 systemd-journald[208]: Journal started Jun 20 19:55:09.933561 systemd-journald[208]: Runtime Journal (/run/log/journal/ec2e415be61f10768ac7e067647b645b) is 4.8M, max 38.4M, 33.6M free. Jun 20 19:55:09.908474 systemd-modules-load[209]: Inserted module 'overlay' Jun 20 19:55:09.942774 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:55:09.955337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:55:09.960195 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:55:09.965057 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:55:09.969505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:55:09.974359 kernel: Bridge firewalling registered Jun 20 19:55:09.976790 systemd-modules-load[209]: Inserted module 'br_netfilter' Jun 20 19:55:09.980013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:55:09.983174 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:55:09.989056 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:55:09.990835 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 19:55:09.995112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:55:10.005775 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 20 19:55:10.003806 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:55:10.018090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:55:10.020295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:55:10.021690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:55:10.024760 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:55:10.028914 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 20 19:55:10.054933 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:55:10.084552 systemd-resolved[247]: Positive Trust Anchors: Jun 20 19:55:10.085505 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:55:10.085573 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:55:10.092967 systemd-resolved[247]: Defaulting to hostname 'linux'. Jun 20 19:55:10.096462 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:55:10.097208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:55:10.155785 kernel: SCSI subsystem initialized Jun 20 19:55:10.165772 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:55:10.176774 kernel: iscsi: registered transport (tcp) Jun 20 19:55:10.199120 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:55:10.199210 kernel: QLogic iSCSI HBA Driver Jun 20 19:55:10.218056 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:55:10.237908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:55:10.241472 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:55:10.285483 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:55:10.287893 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:55:10.339773 kernel: raid6: avx512x4 gen() 18032 MB/s Jun 20 19:55:10.357768 kernel: raid6: avx512x2 gen() 17935 MB/s Jun 20 19:55:10.375769 kernel: raid6: avx512x1 gen() 17569 MB/s Jun 20 19:55:10.393770 kernel: raid6: avx2x4 gen() 17740 MB/s Jun 20 19:55:10.411768 kernel: raid6: avx2x2 gen() 17693 MB/s Jun 20 19:55:10.430036 kernel: raid6: avx2x1 gen() 13869 MB/s Jun 20 19:55:10.430108 kernel: raid6: using algorithm avx512x4 gen() 18032 MB/s Jun 20 19:55:10.448861 kernel: raid6: .... xor() 7339 MB/s, rmw enabled Jun 20 19:55:10.448943 kernel: raid6: using avx512x2 recovery algorithm Jun 20 19:55:10.472792 kernel: xor: automatically using best checksumming function avx Jun 20 19:55:10.642775 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:55:10.649823 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:55:10.651897 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:55:10.686010 systemd-udevd[456]: Using default interface naming scheme 'v255'. 
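The raid6 lines are a boot-time benchmark: each gen() variant is timed and the fastest one is selected, which is why the "using algorithm" line repeats the avx512x4 figure. In Python terms:

    # gen() throughputs benchmarked above; the kernel picks the fastest.
    gen_mb_s = {"avx512x4": 18032, "avx512x2": 17935, "avx512x1": 17569,
                "avx2x4": 17740, "avx2x2": 17693, "avx2x1": 13869}
    best = max(gen_mb_s, key=gen_mb_s.get)
    print(best, gen_mb_s[best])  # avx512x4 18032 -> "using algorithm avx512x4"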
Jun 20 19:55:10.692831 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:55:10.696963 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:55:10.732437 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Jun 20 19:55:10.734851 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jun 20 19:55:10.763866 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:55:10.766166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:55:10.849713 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:55:10.855927 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:55:10.949832 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 20 19:55:10.950104 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 20 19:55:10.961787 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jun 20 19:55:10.967041 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 20 19:55:10.967302 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 20 19:55:10.971764 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:55:10.981765 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:9b:c7:95:24:eb Jun 20 19:55:10.982046 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 19:55:10.995220 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 19:55:10.995285 kernel: GPT:9289727 != 16777215 Jun 20 19:55:10.994940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:55:11.001465 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 19:55:11.001500 kernel: GPT:9289727 != 16777215 Jun 20 19:55:11.001518 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 19:55:11.001544 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:55:10.995131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:55:11.001989 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:55:11.004691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:55:11.006463 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:55:11.017972 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:55:11.018232 (udev-worker)[511]: Network interface NamePolicy= disabled on kernel command line. Jun 20 19:55:11.021875 kernel: AES CTR mode by8 optimization enabled Jun 20 19:55:11.020312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:55:11.027951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:55:11.074072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:55:11.077773 kernel: nvme nvme0: using unchecked data buffer Jun 20 19:55:11.216407 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 20 19:55:11.229239 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 20 19:55:11.230119 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:55:11.250845 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
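The GPT warnings ("GPT:9289727 != 16777215") are the usual sign of a disk image written for a smaller disk and then attached to a larger volume: the backup GPT header still sits at the image's old last LBA rather than at the end of the EBS volume. A quick Python look at what those two LBAs mean, assuming 512-byte logical sectors:

    SECTOR = 512                      # assumed logical sector size
    image_last_lba  = 9289727         # where the backup GPT header actually is
    volume_last_lba = 16777215        # last LBA of the attached volume

    GiB = 1024 ** 3
    print((image_last_lba + 1) * SECTOR / GiB)   # ~4.43 GiB original image
    print((volume_last_lba + 1) * SECTOR / GiB)  # 8.0 GiB attached volume

The mismatch is harmless at this point; tools such as parted (as the kernel suggests) or sgdisk can relocate the backup header to the real end of the disk, which typically happens when the root partition is grown on first boot.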
Jun 20 19:55:11.260973 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 20 19:55:11.261660 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 20 19:55:11.263102 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:55:11.264300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:55:11.265439 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:55:11.267165 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:55:11.270142 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:55:11.291284 disk-uuid[691]: Primary Header is updated. Jun 20 19:55:11.291284 disk-uuid[691]: Secondary Entries is updated. Jun 20 19:55:11.291284 disk-uuid[691]: Secondary Header is updated. Jun 20 19:55:11.296557 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:55:11.299770 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:55:12.313818 disk-uuid[694]: The operation has completed successfully. Jun 20 19:55:12.314479 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:55:12.436170 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:55:12.436313 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:55:12.468180 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:55:12.497447 sh[959]: Success Jun 20 19:55:12.519314 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:55:12.519401 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:55:12.519425 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:55:12.532770 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jun 20 19:55:12.637866 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:55:12.640514 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:55:12.653345 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:55:12.681771 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:55:12.684786 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (983) Jun 20 19:55:12.688836 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:55:12.688903 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:55:12.691168 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:55:12.825109 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:55:12.826050 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:55:12.826597 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:55:12.827363 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:55:12.829152 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 20 19:55:12.875515 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1016) Jun 20 19:55:12.880243 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:55:12.880314 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:55:12.882125 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:55:12.899773 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:55:12.898459 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:55:12.903034 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:55:12.942151 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:55:12.945449 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:55:12.998251 systemd-networkd[1152]: lo: Link UP Jun 20 19:55:12.998264 systemd-networkd[1152]: lo: Gained carrier Jun 20 19:55:13.000227 systemd-networkd[1152]: Enumeration completed Jun 20 19:55:13.000362 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:55:13.001358 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:55:13.001363 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:55:13.003243 systemd[1]: Reached target network.target - Network. Jun 20 19:55:13.005452 systemd-networkd[1152]: eth0: Link UP Jun 20 19:55:13.005458 systemd-networkd[1152]: eth0: Gained carrier Jun 20 19:55:13.005474 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:55:13.018853 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.30.175/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 19:55:13.372264 ignition[1099]: Ignition 2.21.0 Jun 20 19:55:13.372819 ignition[1099]: Stage: fetch-offline Jun 20 19:55:13.373094 ignition[1099]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:55:13.373106 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 19:55:13.373637 ignition[1099]: Ignition finished successfully Jun 20 19:55:13.375424 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:55:13.377432 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
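The DHCPv4 lease in the systemd-networkd lines above can be sanity-checked with the standard library: a /20 puts the address in 172.31.16.0/20, which also contains the advertised gateway.

    import ipaddress

    # Values from the DHCPv4 lease logged above.
    iface = ipaddress.ip_interface("172.31.30.175/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True: the gateway is on-link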
Jun 20 19:55:13.402525 ignition[1161]: Ignition 2.21.0 Jun 20 19:55:13.402542 ignition[1161]: Stage: fetch Jun 20 19:55:13.403198 ignition[1161]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:55:13.403218 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 19:55:13.403399 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 19:55:13.413260 ignition[1161]: PUT result: OK Jun 20 19:55:13.415089 ignition[1161]: parsed url from cmdline: "" Jun 20 19:55:13.415103 ignition[1161]: no config URL provided Jun 20 19:55:13.415112 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:55:13.415123 ignition[1161]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:55:13.415142 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 19:55:13.415684 ignition[1161]: PUT result: OK Jun 20 19:55:13.415765 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 20 19:55:13.416406 ignition[1161]: GET result: OK Jun 20 19:55:13.416474 ignition[1161]: parsing config with SHA512: 982bffeb63826b1d0f8002e8d81abb1e3a242fca08dc24a486a3de3c7e6c79a037362beb881e7a649a0ed78e580e3847994c39a2b8e71f5dc2b44022d492b2aa Jun 20 19:55:13.420701 unknown[1161]: fetched base config from "system" Jun 20 19:55:13.420710 unknown[1161]: fetched base config from "system" Jun 20 19:55:13.421056 ignition[1161]: fetch: fetch complete Jun 20 19:55:13.420715 unknown[1161]: fetched user config from "aws" Jun 20 19:55:13.421061 ignition[1161]: fetch: fetch passed Jun 20 19:55:13.421098 ignition[1161]: Ignition finished successfully Jun 20 19:55:13.423548 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 19:55:13.425002 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:55:13.456243 ignition[1167]: Ignition 2.21.0 Jun 20 19:55:13.456258 ignition[1167]: Stage: kargs Jun 20 19:55:13.456650 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:55:13.456663 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 19:55:13.456819 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 19:55:13.458637 ignition[1167]: PUT result: OK Jun 20 19:55:13.464298 ignition[1167]: kargs: kargs passed Jun 20 19:55:13.464361 ignition[1167]: Ignition finished successfully Jun 20 19:55:13.466394 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:55:13.467878 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:55:13.496418 ignition[1174]: Ignition 2.21.0 Jun 20 19:55:13.496433 ignition[1174]: Stage: disks Jun 20 19:55:13.496822 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:55:13.496835 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 19:55:13.496946 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 19:55:13.497872 ignition[1174]: PUT result: OK Jun 20 19:55:13.500627 ignition[1174]: disks: disks passed Jun 20 19:55:13.500717 ignition[1174]: Ignition finished successfully Jun 20 19:55:13.502832 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:55:13.503427 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:55:13.503810 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:55:13.504467 systemd[1]: Reached target local-fs.target - Local File Systems. 
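The fetch stage above talks to the EC2 instance metadata service twice: a PUT for an IMDSv2 session token, then a GET for the user data that carries the Ignition config. A minimal Python equivalent of that exchange (Ignition's own HTTP client handles retries and timeouts more carefully), only meaningful from inside an EC2 instance:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT a session token, as in "PUT .../latest/api/token" above.
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=5).read().decode()

    # Step 2: GET the user data with that token, same path Ignition fetches.
    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(data_req, timeout=5).read().decode())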
Jun 20 19:55:13.505019 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:55:13.505560 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:55:13.507211 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:55:13.558019 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 20 19:55:13.561013 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:55:13.562910 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:55:13.730779 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:55:13.731096 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:55:13.731974 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:55:13.733897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:55:13.736830 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:55:13.738023 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 20 19:55:13.738779 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:55:13.739810 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:55:13.745684 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:55:13.747979 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:55:13.763769 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Jun 20 19:55:13.770781 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:55:13.770857 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:55:13.771860 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:55:13.781462 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:55:14.171378 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:55:14.190047 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:55:14.198400 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:55:14.203364 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:55:14.519970 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:55:14.522195 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:55:14.523908 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:55:14.543801 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
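The systemd-fsck line above reports the root filesystem as "clean, 15/553520 files, 52789/553472 blocks". As a small reading aid (all numbers are copied from that line), the ratios work out to well under 10% usage:

```python
# Figures copied from the systemd-fsck[1182] line above.
files_used, files_total = 15, 553520
blocks_used, blocks_total = 52789, 553472

print(f"inodes: {100 * files_used / files_total:.4f}% used")    # ~0.0027%
print(f"blocks: {100 * blocks_used / blocks_total:.2f}% used")  # ~9.54%
```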
Jun 20 19:55:14.546271 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:55:14.576108 ignition[1313]: INFO : Ignition 2.21.0 Jun 20 19:55:14.576108 ignition[1313]: INFO : Stage: mount Jun 20 19:55:14.577223 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:55:14.577223 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 19:55:14.577223 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 19:55:14.577223 ignition[1313]: INFO : PUT result: OK Jun 20 19:55:14.579356 ignition[1313]: INFO : mount: mount passed Jun 20 19:55:14.580227 ignition[1313]: INFO : Ignition finished successfully Jun 20 19:55:14.580778 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:55:14.581352 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:55:14.583193 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:55:14.733725 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:55:14.772324 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1325) Jun 20 19:55:14.776924 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:55:14.777008 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:55:14.777031 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:55:14.777353 systemd-networkd[1152]: eth0: Gained IPv6LL Jun 20 19:55:14.788411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:55:14.825536 ignition[1342]: INFO : Ignition 2.21.0 Jun 20 19:55:14.825536 ignition[1342]: INFO : Stage: files Jun 20 19:55:14.825536 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:55:14.825536 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 19:55:14.825536 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 19:55:14.828776 ignition[1342]: INFO : PUT result: OK Jun 20 19:55:14.831716 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:55:14.833087 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:55:14.833087 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:55:14.846372 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:55:14.847903 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:55:14.848881 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:55:14.847959 unknown[1342]: wrote ssh authorized keys file for user: core Jun 20 19:55:14.852633 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 19:55:14.852633 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jun 20 19:55:14.937838 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:55:15.108968 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 19:55:15.108968 
ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:55:15.110610 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:55:15.110610 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:55:15.110610 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:55:15.110610 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:55:15.110610 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:55:15.110610 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:55:15.110610 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:55:15.116293 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:55:15.116293 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:55:15.116293 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:55:15.119047 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:55:15.119047 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:55:15.119047 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jun 20 19:55:15.866223 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 20 19:55:17.010508 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:55:17.010508 ignition[1342]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 20 19:55:17.013655 ignition[1342]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:55:17.018502 ignition[1342]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:55:17.018502 ignition[1342]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 20 19:55:17.018502 ignition[1342]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:55:17.025300 ignition[1342]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:55:17.025300 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [started] writing file 
"/sysroot/etc/.ignition-result.json" Jun 20 19:55:17.025300 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:55:17.025300 ignition[1342]: INFO : files: files passed Jun 20 19:55:17.025300 ignition[1342]: INFO : Ignition finished successfully Jun 20 19:55:17.023332 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:55:17.025686 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:55:17.030961 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:55:17.043086 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:55:17.043235 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:55:17.062835 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:55:17.062835 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:55:17.066668 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:55:17.068813 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:55:17.069829 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:55:17.071718 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 19:55:17.127292 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:55:17.127455 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:55:17.128869 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:55:17.130082 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:55:17.130930 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:55:17.133736 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:55:17.169682 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:55:17.172174 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:55:17.196450 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:55:17.197166 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:55:17.198273 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:55:17.199213 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:55:17.199449 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:55:17.200773 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:55:17.201690 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:55:17.202491 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:55:17.203279 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:55:17.204215 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:55:17.204964 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:55:17.205735 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jun 20 19:55:17.206519 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:55:17.207324 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:55:17.208637 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:55:17.209412 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:55:17.210190 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:55:17.210421 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:55:17.211453 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:55:17.212447 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:55:17.213128 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:55:17.213837 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:55:17.214296 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:55:17.214480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:55:17.215971 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:55:17.216301 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:55:17.217061 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:55:17.217267 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:55:17.219002 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:55:17.222134 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:55:17.222404 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:55:17.224388 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:55:17.226078 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:55:17.226326 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:55:17.229696 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:55:17.230895 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:55:17.240287 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:55:17.240409 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:55:17.256636 ignition[1395]: INFO : Ignition 2.21.0 Jun 20 19:55:17.256636 ignition[1395]: INFO : Stage: umount Jun 20 19:55:17.256636 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:55:17.256636 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 19:55:17.256636 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 19:55:17.259697 ignition[1395]: INFO : PUT result: OK Jun 20 19:55:17.265554 ignition[1395]: INFO : umount: umount passed Jun 20 19:55:17.265554 ignition[1395]: INFO : Ignition finished successfully Jun 20 19:55:17.267445 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:55:17.267626 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:55:17.269285 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:55:17.269397 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:55:17.270341 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jun 20 19:55:17.270410 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:55:17.271162 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 19:55:17.271241 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 19:55:17.271892 systemd[1]: Stopped target network.target - Network. Jun 20 19:55:17.272673 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:55:17.272774 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:55:17.273485 systemd[1]: Stopped target paths.target - Path Units. Jun 20 19:55:17.274814 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 19:55:17.275247 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:55:17.275827 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:55:17.277826 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:55:17.278609 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:55:17.278677 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:55:17.279382 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:55:17.279440 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:55:17.280272 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:55:17.280352 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:55:17.281997 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:55:17.282067 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:55:17.282960 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:55:17.283695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 19:55:17.289504 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:55:17.290969 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:55:17.291152 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:55:17.296924 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:55:17.297631 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:55:17.297843 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:55:17.299798 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:55:17.300203 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:55:17.300345 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:55:17.302968 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 19:55:17.303393 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:55:17.303448 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:55:17.304225 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:55:17.304298 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:55:17.306827 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:55:17.307245 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:55:17.307315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:55:17.308346 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jun 20 19:55:17.308413 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:55:17.310611 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:55:17.310672 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:55:17.311769 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:55:17.311842 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:55:17.312908 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:55:17.319542 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:55:17.319648 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:55:17.324579 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:55:17.325848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:55:17.328441 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:55:17.329132 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:55:17.329667 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:55:17.329712 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:55:17.330397 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:55:17.330465 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:55:17.331543 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:55:17.331608 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:55:17.334145 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:55:17.334220 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:55:17.337208 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:55:17.338487 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 19:55:17.339200 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:55:17.341274 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:55:17.341863 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:55:17.343638 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:55:17.343705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:55:17.346289 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 19:55:17.346370 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 19:55:17.346428 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:55:17.346987 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:55:17.348875 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:55:17.355920 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:55:17.356145 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:55:17.357647 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jun 20 19:55:17.359453 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:55:17.381513 systemd[1]: Switching root. Jun 20 19:55:17.427575 systemd-journald[208]: Journal stopped Jun 20 19:55:19.394637 systemd-journald[208]: Received SIGTERM from PID 1 (systemd). Jun 20 19:55:19.394720 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:55:19.398800 kernel: SELinux: policy capability open_perms=1 Jun 20 19:55:19.398840 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:55:19.398867 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:55:19.398892 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:55:19.398909 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:55:19.398931 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:55:19.398952 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:55:19.398969 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 19:55:19.398988 kernel: audit: type=1403 audit(1750449317.845:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:55:19.399009 systemd[1]: Successfully loaded SELinux policy in 65.260ms. Jun 20 19:55:19.399038 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.231ms. Jun 20 19:55:19.399060 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:55:19.399083 systemd[1]: Detected virtualization amazon. Jun 20 19:55:19.399101 systemd[1]: Detected architecture x86-64. Jun 20 19:55:19.399120 systemd[1]: Detected first boot. Jun 20 19:55:19.399140 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:55:19.399158 zram_generator::config[1438]: No configuration found. Jun 20 19:55:19.399182 kernel: Guest personality initialized and is inactive Jun 20 19:55:19.399201 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 20 19:55:19.399219 kernel: Initialized host personality Jun 20 19:55:19.399236 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:55:19.399257 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:55:19.399278 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:55:19.399296 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:55:19.399315 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:55:19.399333 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:55:19.399353 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:55:19.399374 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:55:19.399393 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:55:19.399413 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:55:19.399433 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:55:19.399453 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:55:19.399473 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
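First boot above is detected as "virtualization amazon", and the machine ID is initialized from the VM UUID. One signal such detection can rely on is the DMI information the hypervisor exposes under /sys/class/dmi/id; the sketch below simply reads those sysfs files (a generic Linux interface, not systemd's exact code path; product_uuid normally requires root, and on EC2 sys_vendor typically reads "Amazon EC2"):

```python
from pathlib import Path

DMI = Path("/sys/class/dmi/id")

def read_dmi(name: str) -> str:
    """Return one DMI attribute, or a placeholder if it cannot be read."""
    try:
        return (DMI / name).read_text().strip()
    except OSError:
        return "<unavailable>"

# sys_vendor/product_name identify the platform; product_uuid is the VM UUID
# that the journal says was used to seed the machine ID on first boot.
for name in ("sys_vendor", "product_name", "product_uuid"):
    print(f"{name}: {read_dmi(name)}")
```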
Jun 20 19:55:19.399492 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:55:19.399512 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:55:19.399536 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:55:19.399556 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 19:55:19.399575 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:55:19.399600 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:55:19.399621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:55:19.399640 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 19:55:19.399658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:55:19.399677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:55:19.399696 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:55:19.399715 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:55:19.399733 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:55:19.399769 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:55:19.399790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:55:19.399809 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:55:19.399828 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:55:19.399848 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:55:19.399867 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:55:19.399886 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:55:19.399904 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:55:19.399924 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:55:19.399945 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:55:19.399963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:55:19.399982 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:55:19.400011 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:55:19.400029 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:55:19.400048 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:55:19.400067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:55:19.400085 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:55:19.400104 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:55:19.400126 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:55:19.400145 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:55:19.400167 systemd[1]: Reached target machines.target - Containers. 
Jun 20 19:55:19.400185 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:55:19.400204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:55:19.400223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:55:19.400242 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:55:19.400261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:55:19.400283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:55:19.400302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:55:19.400320 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:55:19.400339 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:55:19.400359 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:55:19.400378 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:55:19.400396 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:55:19.400414 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:55:19.400436 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 19:55:19.400455 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:55:19.400473 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:55:19.400493 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:55:19.400511 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:55:19.400530 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:55:19.400548 kernel: loop: module loaded Jun 20 19:55:19.400568 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:55:19.400587 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:55:19.400608 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:55:19.400626 systemd[1]: Stopped verity-setup.service. Jun 20 19:55:19.400649 kernel: fuse: init (API version 7.41) Jun 20 19:55:19.400665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:55:19.400684 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:55:19.400704 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:55:19.400727 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:55:19.402803 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:55:19.402844 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:55:19.402864 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:55:19.402888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jun 20 19:55:19.402906 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:55:19.402925 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:55:19.402943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:55:19.402962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:55:19.402980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:55:19.402999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:55:19.403017 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:55:19.403035 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:55:19.403059 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:55:19.403078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:55:19.403096 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:55:19.403115 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:55:19.403135 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:55:19.403155 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:55:19.403175 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:55:19.403193 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:55:19.403213 kernel: ACPI: bus type drm_connector registered Jun 20 19:55:19.403235 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:55:19.403254 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:55:19.403275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:55:19.403338 systemd-journald[1521]: Collecting audit messages is disabled. Jun 20 19:55:19.403380 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 19:55:19.403401 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:55:19.403422 systemd-journald[1521]: Journal started Jun 20 19:55:19.403463 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec2e415be61f10768ac7e067647b645b) is 4.8M, max 38.4M, 33.6M free. Jun 20 19:55:18.967525 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:55:18.987285 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 20 19:55:18.987937 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:55:19.410793 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:55:19.414767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:55:19.420451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:55:19.427772 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:55:19.436773 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:55:19.438322 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 20 19:55:19.438564 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:55:19.439731 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:55:19.441012 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:55:19.442177 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:55:19.444090 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:55:19.467159 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:55:19.470519 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:55:19.472129 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:55:19.477974 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:55:19.482971 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:55:19.485979 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:55:19.493631 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:55:19.517841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:55:19.532648 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:55:19.533539 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec2e415be61f10768ac7e067647b645b is 67.550ms for 1019 entries. Jun 20 19:55:19.533539 systemd-journald[1521]: System Journal (/var/log/journal/ec2e415be61f10768ac7e067647b645b) is 8M, max 195.6M, 187.6M free. Jun 20 19:55:19.625050 systemd-journald[1521]: Received client request to flush runtime journal. Jun 20 19:55:19.625128 kernel: loop0: detected capacity change from 0 to 113872 Jun 20 19:55:19.588752 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:55:19.627926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:55:19.667136 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:55:19.672078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:55:19.674899 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:55:19.712123 kernel: loop1: detected capacity change from 0 to 72352 Jun 20 19:55:19.729826 systemd-tmpfiles[1589]: ACLs are not supported, ignoring. Jun 20 19:55:19.730162 systemd-tmpfiles[1589]: ACLs are not supported, ignoring. Jun 20 19:55:19.735579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:55:19.841771 kernel: loop2: detected capacity change from 0 to 229808 Jun 20 19:55:19.984793 kernel: loop3: detected capacity change from 0 to 146240 Jun 20 19:55:19.990206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:55:20.110785 kernel: loop4: detected capacity change from 0 to 113872 Jun 20 19:55:20.142802 kernel: loop5: detected capacity change from 0 to 72352 Jun 20 19:55:20.165778 kernel: loop6: detected capacity change from 0 to 229808 Jun 20 19:55:20.210807 kernel: loop7: detected capacity change from 0 to 146240 Jun 20 19:55:20.229918 (sd-merge)[1595]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. 
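The (sd-merge) line above records systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images into /usr. To see where such images are picked up from, the sketch below lists a few of the standard sysext search directories (listing only; it does not reproduce the overlay merge, and it does not cover every source an image can come from):

```python
from pathlib import Path

# Common systemd-sysext search locations; the kubernetes.raw link under
# /etc/extensions was written by the Ignition files stage earlier in this log.
SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

for directory in map(Path, SEARCH_DIRS):
    if not directory.is_dir():
        continue
    for entry in sorted(directory.iterdir()):
        kind = "raw image" if entry.suffix == ".raw" else "directory or other"
        print(f"{directory}: {entry.name} ({kind})")
```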
Jun 20 19:55:20.231545 (sd-merge)[1595]: Merged extensions into '/usr'. Jun 20 19:55:20.242055 systemd[1]: Reload requested from client PID 1550 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:55:20.242078 systemd[1]: Reloading... Jun 20 19:55:20.388782 zram_generator::config[1624]: No configuration found. Jun 20 19:55:20.536510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:55:20.660109 systemd[1]: Reloading finished in 415 ms. Jun 20 19:55:20.682022 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:55:20.683043 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:55:20.694293 systemd[1]: Starting ensure-sysext.service... Jun 20 19:55:20.697959 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:55:20.706293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:55:20.729818 systemd[1]: Reload requested from client PID 1673 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:55:20.729991 systemd[1]: Reloading... Jun 20 19:55:20.757788 systemd-udevd[1675]: Using default interface naming scheme 'v255'. Jun 20 19:55:20.770473 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:55:20.770517 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 19:55:20.770908 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:55:20.771290 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:55:20.772528 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:55:20.772950 systemd-tmpfiles[1674]: ACLs are not supported, ignoring. Jun 20 19:55:20.773051 systemd-tmpfiles[1674]: ACLs are not supported, ignoring. Jun 20 19:55:20.781789 systemd-tmpfiles[1674]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:55:20.781806 systemd-tmpfiles[1674]: Skipping /boot Jun 20 19:55:20.825839 systemd-tmpfiles[1674]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:55:20.825860 systemd-tmpfiles[1674]: Skipping /boot Jun 20 19:55:20.876781 zram_generator::config[1703]: No configuration found. Jun 20 19:55:21.184296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:55:21.244515 (udev-worker)[1722]: Network interface NamePolicy= disabled on kernel command line. Jun 20 19:55:21.246206 ldconfig[1546]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jun 20 19:55:21.287768 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:55:21.306781 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jun 20 19:55:21.314846 kernel: ACPI: button: Power Button [PWRF] Jun 20 19:55:21.316796 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jun 20 19:55:21.321774 kernel: ACPI: button: Sleep Button [SLPF] Jun 20 19:55:21.388760 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jun 20 19:55:21.432079 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:55:21.432349 systemd[1]: Reloading finished in 701 ms. Jun 20 19:55:21.451879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:55:21.454783 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:55:21.470916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:55:21.547580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:55:21.552050 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:55:21.557075 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:55:21.559075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:55:21.562667 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:55:21.566358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:55:21.571867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:55:21.578476 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:55:21.581003 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:55:21.581215 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:55:21.584157 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:55:21.588727 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:55:21.596626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:55:21.597586 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:55:21.603211 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:55:21.611764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:55:21.612827 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:55:21.636010 systemd[1]: Finished ensure-sysext.service. Jun 20 19:55:21.648023 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:55:21.668529 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:55:21.670320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:55:21.676577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jun 20 19:55:21.676865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:55:21.687837 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:55:21.701018 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:55:21.703172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:55:21.705409 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:55:21.706144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:55:21.709596 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:55:21.709721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:55:21.762702 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:55:21.768015 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:55:21.803405 augenrules[1900]: No rules Jun 20 19:55:21.805626 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:55:21.807019 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:55:21.836424 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:55:21.902418 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:55:21.903336 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:55:21.925822 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:55:21.983813 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:55:22.010349 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 20 19:55:22.015944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:55:22.068297 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:55:22.103213 systemd-resolved[1835]: Positive Trust Anchors: Jun 20 19:55:22.103643 systemd-resolved[1835]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:55:22.103813 systemd-resolved[1835]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:55:22.107838 systemd-networkd[1834]: lo: Link UP Jun 20 19:55:22.107853 systemd-networkd[1834]: lo: Gained carrier Jun 20 19:55:22.109513 systemd-networkd[1834]: Enumeration completed Jun 20 19:55:22.109663 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jun 20 19:55:22.110734 systemd-networkd[1834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:55:22.110762 systemd-networkd[1834]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:55:22.113485 systemd-resolved[1835]: Defaulting to hostname 'linux'. Jun 20 19:55:22.113901 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:55:22.116929 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:55:22.119230 systemd-networkd[1834]: eth0: Link UP Jun 20 19:55:22.119452 systemd-networkd[1834]: eth0: Gained carrier Jun 20 19:55:22.119492 systemd-networkd[1834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:55:22.121361 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:55:22.122939 systemd[1]: Reached target network.target - Network. Jun 20 19:55:22.124216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:55:22.124895 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:55:22.126411 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:55:22.127067 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:55:22.127837 systemd-networkd[1834]: eth0: DHCPv4 address 172.31.30.175/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 19:55:22.128780 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 20 19:55:22.130181 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:55:22.130878 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:55:22.131292 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:55:22.131680 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:55:22.131720 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:55:22.132170 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:55:22.134217 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:55:22.136910 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:55:22.140427 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:55:22.141156 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:55:22.141618 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:55:22.144439 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:55:22.145365 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:55:22.146567 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:55:22.147917 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:55:22.148366 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:55:22.148769 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jun 20 19:55:22.148792 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:55:22.151559 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:55:22.155585 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:55:22.158871 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:55:22.161959 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:55:22.169877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:55:22.181058 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:55:22.182904 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:55:22.190147 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 20 19:55:22.194006 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:55:22.198996 systemd[1]: Started ntpd.service - Network Time Service. Jun 20 19:55:22.201674 jq[1958]: false Jun 20 19:55:22.206094 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:55:22.217211 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 20 19:55:22.222014 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:55:22.229173 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:55:22.252810 google_oslogin_nss_cache[1960]: oslogin_cache_refresh[1960]: Refreshing passwd entry cache Jun 20 19:55:22.252060 oslogin_cache_refresh[1960]: Refreshing passwd entry cache Jun 20 19:55:22.259901 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:55:22.262767 google_oslogin_nss_cache[1960]: oslogin_cache_refresh[1960]: Failure getting users, quitting Jun 20 19:55:22.262767 google_oslogin_nss_cache[1960]: oslogin_cache_refresh[1960]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:55:22.262767 google_oslogin_nss_cache[1960]: oslogin_cache_refresh[1960]: Refreshing group entry cache Jun 20 19:55:22.261045 oslogin_cache_refresh[1960]: Failure getting users, quitting Jun 20 19:55:22.261069 oslogin_cache_refresh[1960]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:55:22.261125 oslogin_cache_refresh[1960]: Refreshing group entry cache Jun 20 19:55:22.263181 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:55:22.269855 oslogin_cache_refresh[1960]: Failure getting groups, quitting Jun 20 19:55:22.271013 google_oslogin_nss_cache[1960]: oslogin_cache_refresh[1960]: Failure getting groups, quitting Jun 20 19:55:22.271013 google_oslogin_nss_cache[1960]: oslogin_cache_refresh[1960]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:55:22.269871 oslogin_cache_refresh[1960]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:55:22.277157 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:55:22.279314 systemd[1]: Starting update-engine.service - Update Engine... 
Jun 20 19:55:22.284992 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:55:22.294358 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:55:22.297674 extend-filesystems[1959]: Found /dev/nvme0n1p6 Jun 20 19:55:22.301848 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:55:22.304949 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:55:22.305236 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:55:22.305632 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 20 19:55:22.305923 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 20 19:55:22.313526 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:55:22.313992 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:55:22.322550 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:55:22.324841 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:55:22.329328 extend-filesystems[1959]: Found /dev/nvme0n1p9 Jun 20 19:55:22.332792 extend-filesystems[1959]: Checking size of /dev/nvme0n1p9 Jun 20 19:55:22.378310 update_engine[1979]: I20250620 19:55:22.377386 1979 main.cc:92] Flatcar Update Engine starting Jun 20 19:55:22.391765 jq[1980]: true Jun 20 19:55:22.395598 extend-filesystems[1959]: Resized partition /dev/nvme0n1p9 Jun 20 19:55:22.421858 (ntainerd)[1986]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:55:22.432753 ntpd[1962]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:24:37 UTC 2025 (1): Starting Jun 20 19:55:22.438420 extend-filesystems[2012]: resize2fs 1.47.2 (1-Jan-2025) Jun 20 19:55:22.442698 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:24:37 UTC 2025 (1): Starting Jun 20 19:55:22.442698 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 19:55:22.442698 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: ---------------------------------------------------- Jun 20 19:55:22.442698 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: ntp-4 is maintained by Network Time Foundation, Jun 20 19:55:22.442698 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 19:55:22.442698 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: corporation. Support and training for ntp-4 are Jun 20 19:55:22.442698 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: available at https://www.nwtime.org/support Jun 20 19:55:22.442698 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: ---------------------------------------------------- Jun 20 19:55:22.432790 ntpd[1962]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 19:55:22.432801 ntpd[1962]: ---------------------------------------------------- Jun 20 19:55:22.432811 ntpd[1962]: ntp-4 is maintained by Network Time Foundation, Jun 20 19:55:22.432820 ntpd[1962]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 19:55:22.432829 ntpd[1962]: corporation. 
Support and training for ntp-4 are Jun 20 19:55:22.432839 ntpd[1962]: available at https://www.nwtime.org/support Jun 20 19:55:22.432851 ntpd[1962]: ---------------------------------------------------- Jun 20 19:55:22.454148 tar[1984]: linux-amd64/LICENSE Jun 20 19:55:22.454148 tar[1984]: linux-amd64/helm Jun 20 19:55:22.454492 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: proto: precision = 0.068 usec (-24) Jun 20 19:55:22.445444 ntpd[1962]: proto: precision = 0.068 usec (-24) Jun 20 19:55:22.455381 ntpd[1962]: basedate set to 2025-06-08 Jun 20 19:55:22.455413 ntpd[1962]: gps base set to 2025-06-08 (week 2370) Jun 20 19:55:22.455525 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: basedate set to 2025-06-08 Jun 20 19:55:22.455525 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: gps base set to 2025-06-08 (week 2370) Jun 20 19:55:22.478026 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 20 19:55:22.478111 coreos-metadata[1955]: Jun 20 19:55:22.471 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 19:55:22.478048 ntpd[1962]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 19:55:22.478523 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 19:55:22.478523 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 19:55:22.478523 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 19:55:22.478523 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: Listen normally on 3 eth0 172.31.30.175:123 Jun 20 19:55:22.478523 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: Listen normally on 4 lo [::1]:123 Jun 20 19:55:22.478523 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: bind(21) AF_INET6 fe80::49b:c7ff:fe95:24eb%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 19:55:22.478523 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: unable to create socket on eth0 (5) for fe80::49b:c7ff:fe95:24eb%2#123 Jun 20 19:55:22.478523 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: failed to init interface for address fe80::49b:c7ff:fe95:24eb%2 Jun 20 19:55:22.478523 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: Listening on routing socket on fd #21 for interface updates Jun 20 19:55:22.478098 ntpd[1962]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 19:55:22.479189 coreos-metadata[1955]: Jun 20 19:55:22.478 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 20 19:55:22.478291 ntpd[1962]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 19:55:22.478328 ntpd[1962]: Listen normally on 3 eth0 172.31.30.175:123 Jun 20 19:55:22.478370 ntpd[1962]: Listen normally on 4 lo [::1]:123 Jun 20 19:55:22.478421 ntpd[1962]: bind(21) AF_INET6 fe80::49b:c7ff:fe95:24eb%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 19:55:22.478444 ntpd[1962]: unable to create socket on eth0 (5) for fe80::49b:c7ff:fe95:24eb%2#123 Jun 20 19:55:22.478459 ntpd[1962]: failed to init interface for address fe80::49b:c7ff:fe95:24eb%2 Jun 20 19:55:22.478491 ntpd[1962]: Listening on routing socket on fd #21 for interface updates Jun 20 19:55:22.483575 coreos-metadata[1955]: Jun 20 19:55:22.483 INFO Fetch successful Jun 20 19:55:22.483669 coreos-metadata[1955]: Jun 20 19:55:22.483 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 20 19:55:22.489120 coreos-metadata[1955]: Jun 20 19:55:22.488 INFO Fetch successful Jun 20 19:55:22.489120 coreos-metadata[1955]: Jun 20 19:55:22.488 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 20 
19:55:22.490858 coreos-metadata[1955]: Jun 20 19:55:22.490 INFO Fetch successful Jun 20 19:55:22.490969 coreos-metadata[1955]: Jun 20 19:55:22.490 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 20 19:55:22.493876 coreos-metadata[1955]: Jun 20 19:55:22.493 INFO Fetch successful Jun 20 19:55:22.494282 coreos-metadata[1955]: Jun 20 19:55:22.494 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 20 19:55:22.499842 coreos-metadata[1955]: Jun 20 19:55:22.497 INFO Fetch failed with 404: resource not found Jun 20 19:55:22.499842 coreos-metadata[1955]: Jun 20 19:55:22.497 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 20 19:55:22.500089 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 20 19:55:22.500226 dbus-daemon[1956]: [system] SELinux support is enabled Jun 20 19:55:22.501251 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:55:22.508880 ntpd[1962]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 19:55:22.510025 coreos-metadata[1955]: Jun 20 19:55:22.509 INFO Fetch successful Jun 20 19:55:22.510025 coreos-metadata[1955]: Jun 20 19:55:22.509 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 20 19:55:22.510131 jq[2008]: true Jun 20 19:55:22.510247 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 19:55:22.510247 ntpd[1962]: 20 Jun 19:55:22 ntpd[1962]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 19:55:22.508915 ntpd[1962]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 19:55:22.517689 coreos-metadata[1955]: Jun 20 19:55:22.512 INFO Fetch successful Jun 20 19:55:22.517689 coreos-metadata[1955]: Jun 20 19:55:22.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 20 19:55:22.513487 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:55:22.513543 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:55:22.516917 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:55:22.516958 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
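The coreos-metadata fetches above follow the IMDSv2 flow: a PUT to /latest/api/token first, then GETs against the versioned /2021-01-03/meta-data/ paths carrying the returned token; the 404 on .../meta-data/ipv6 simply means the instance has no IPv6 address assigned. The same requests can be reproduced by hand, roughly:

# IMDSv2: fetch a session token, then use it for metadata reads.
TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/2021-01-03/meta-data/instance-id"
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone"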
Jun 20 19:55:22.522763 coreos-metadata[1955]: Jun 20 19:55:22.519 INFO Fetch successful Jun 20 19:55:22.522763 coreos-metadata[1955]: Jun 20 19:55:22.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 20 19:55:22.522763 coreos-metadata[1955]: Jun 20 19:55:22.520 INFO Fetch successful Jun 20 19:55:22.522763 coreos-metadata[1955]: Jun 20 19:55:22.521 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 20 19:55:22.525096 coreos-metadata[1955]: Jun 20 19:55:22.523 INFO Fetch successful Jun 20 19:55:22.547707 dbus-daemon[1956]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1834 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 20 19:55:22.549885 update_engine[1979]: I20250620 19:55:22.549821 1979 update_check_scheduler.cc:74] Next update check in 11m36s Jun 20 19:55:22.558634 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 20 19:55:22.554885 systemd-logind[1969]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 19:55:22.555070 systemd-logind[1969]: Watching system buttons on /dev/input/event3 (Sleep Button) Jun 20 19:55:22.555093 systemd-logind[1969]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:55:22.555896 systemd-logind[1969]: New seat seat0. Jun 20 19:55:22.590088 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 20 19:55:22.591613 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:55:22.593521 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:55:22.613089 extend-filesystems[2012]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 20 19:55:22.613089 extend-filesystems[2012]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 20 19:55:22.613089 extend-filesystems[2012]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 20 19:55:22.611470 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:55:22.653645 extend-filesystems[1959]: Resized filesystem in /dev/nvme0n1p9 Jun 20 19:55:22.613130 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:55:22.613520 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:55:22.700818 bash[2047]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:55:22.704843 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:55:22.714458 systemd[1]: Starting sshkeys.service... Jun 20 19:55:22.731658 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:55:22.733032 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:55:22.760288 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 19:55:22.768990 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
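extend-filesystems finds the ROOT partition (/dev/nvme0n1p9), sees that on-line resizing is required, and grows the mounted ext4 filesystem from 553472 to 1489915 4k blocks with resize2fs. The on-line step amounts to the following (device name taken from this log; only useful once the partition itself has been enlarged):

# Grow a mounted ext4 filesystem to fill its already-enlarged partition.
resize2fs /dev/nvme0n1p9
# Verify the new size as the kernel and the filesystem see it.
lsblk /dev/nvme0n1p9
df -h /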
Jun 20 19:55:23.052111 coreos-metadata[2064]: Jun 20 19:55:23.050 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 19:55:23.062940 coreos-metadata[2064]: Jun 20 19:55:23.061 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 20 19:55:23.068868 coreos-metadata[2064]: Jun 20 19:55:23.068 INFO Fetch successful Jun 20 19:55:23.068868 coreos-metadata[2064]: Jun 20 19:55:23.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 20 19:55:23.073797 coreos-metadata[2064]: Jun 20 19:55:23.073 INFO Fetch successful Jun 20 19:55:23.089793 unknown[2064]: wrote ssh authorized keys file for user: core Jun 20 19:55:23.096318 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 20 19:55:23.101678 dbus-daemon[1956]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 20 19:55:23.112363 dbus-daemon[1956]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2028 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 20 19:55:23.132178 systemd[1]: Starting polkit.service - Authorization Manager... Jun 20 19:55:23.138399 update-ssh-keys[2153]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:55:23.141666 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 19:55:23.159317 systemd[1]: Finished sshkeys.service. Jun 20 19:55:23.172078 locksmithd[2031]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:55:23.318648 containerd[1986]: time="2025-06-20T19:55:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:55:23.334757 containerd[1986]: time="2025-06-20T19:55:23.334426660Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:55:23.380989 containerd[1986]: time="2025-06-20T19:55:23.380923269Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.88µs" Jun 20 19:55:23.380989 containerd[1986]: time="2025-06-20T19:55:23.380975593Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:55:23.380989 containerd[1986]: time="2025-06-20T19:55:23.380998653Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:55:23.381499 containerd[1986]: time="2025-06-20T19:55:23.381186079Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:55:23.381499 containerd[1986]: time="2025-06-20T19:55:23.381213517Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:55:23.381499 containerd[1986]: time="2025-06-20T19:55:23.381260695Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:55:23.381499 containerd[1986]: time="2025-06-20T19:55:23.381334102Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:55:23.381499 containerd[1986]: time="2025-06-20T19:55:23.381349306Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:55:23.381699 containerd[1986]: time="2025-06-20T19:55:23.381611408Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:55:23.381699 containerd[1986]: time="2025-06-20T19:55:23.381633117Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:55:23.381699 containerd[1986]: time="2025-06-20T19:55:23.381650468Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:55:23.381699 containerd[1986]: time="2025-06-20T19:55:23.381662902Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:55:23.383855 containerd[1986]: time="2025-06-20T19:55:23.383813395Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:55:23.384172 containerd[1986]: time="2025-06-20T19:55:23.384146311Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:55:23.384235 containerd[1986]: time="2025-06-20T19:55:23.384195252Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:55:23.384235 containerd[1986]: time="2025-06-20T19:55:23.384212368Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 19:55:23.384305 containerd[1986]: time="2025-06-20T19:55:23.384247370Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 19:55:23.384614 containerd[1986]: time="2025-06-20T19:55:23.384589730Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 19:55:23.384965 containerd[1986]: time="2025-06-20T19:55:23.384679966Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:55:23.390898 containerd[1986]: time="2025-06-20T19:55:23.390854077Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 19:55:23.391027 containerd[1986]: time="2025-06-20T19:55:23.390938936Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 19:55:23.391027 containerd[1986]: time="2025-06-20T19:55:23.390958724Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.392766536Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.392810770Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.392839692Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.392869049Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.392886333Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.392902125Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.392918711Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.392935935Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.392954725Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.393111379Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.393138174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.393162047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.393178890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 19:55:23.395212 containerd[1986]: time="2025-06-20T19:55:23.393226014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 19:55:23.395729 containerd[1986]: time="2025-06-20T19:55:23.393244327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 19:55:23.395729 containerd[1986]: time="2025-06-20T19:55:23.393263858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 19:55:23.395729 containerd[1986]: time="2025-06-20T19:55:23.393279313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 19:55:23.395729 containerd[1986]: time="2025-06-20T19:55:23.393296470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 19:55:23.395729 containerd[1986]: time="2025-06-20T19:55:23.393311907Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 19:55:23.395729 containerd[1986]: time="2025-06-20T19:55:23.393329983Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 19:55:23.395729 containerd[1986]: time="2025-06-20T19:55:23.393419087Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 19:55:23.395729 containerd[1986]: time="2025-06-20T19:55:23.393439077Z" level=info msg="Start snapshots syncer" Jun 20 19:55:23.395729 containerd[1986]: time="2025-06-20T19:55:23.393498827Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 19:55:23.396078 containerd[1986]: time="2025-06-20T19:55:23.393959761Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 19:55:23.396078 containerd[1986]: time="2025-06-20T19:55:23.394036972Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 19:55:23.397455 containerd[1986]: time="2025-06-20T19:55:23.396490336Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:55:23.397455 containerd[1986]: time="2025-06-20T19:55:23.396677322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:55:23.397455 containerd[1986]: time="2025-06-20T19:55:23.396709332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:55:23.397455 containerd[1986]: time="2025-06-20T19:55:23.396724945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.398789423Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.398845285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.398868502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.398886273Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.398922725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:55:23.400549 containerd[1986]: 
time="2025-06-20T19:55:23.398938902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.398954071Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.399018149Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.399041038Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.399054989Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.399135541Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.399148976Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.399163038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:55:23.400549 containerd[1986]: time="2025-06-20T19:55:23.399178848Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:55:23.401107 containerd[1986]: time="2025-06-20T19:55:23.399201052Z" level=info msg="runtime interface created" Jun 20 19:55:23.401107 containerd[1986]: time="2025-06-20T19:55:23.399209492Z" level=info msg="created NRI interface" Jun 20 19:55:23.401107 containerd[1986]: time="2025-06-20T19:55:23.399221530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:55:23.401107 containerd[1986]: time="2025-06-20T19:55:23.399242504Z" level=info msg="Connect containerd service" Jun 20 19:55:23.401107 containerd[1986]: time="2025-06-20T19:55:23.399281066Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:55:23.402283 containerd[1986]: time="2025-06-20T19:55:23.402244360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:55:23.433719 ntpd[1962]: bind(24) AF_INET6 fe80::49b:c7ff:fe95:24eb%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 19:55:23.436702 ntpd[1962]: 20 Jun 19:55:23 ntpd[1962]: bind(24) AF_INET6 fe80::49b:c7ff:fe95:24eb%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 19:55:23.436702 ntpd[1962]: 20 Jun 19:55:23 ntpd[1962]: unable to create socket on eth0 (6) for fe80::49b:c7ff:fe95:24eb%2#123 Jun 20 19:55:23.436702 ntpd[1962]: 20 Jun 19:55:23 ntpd[1962]: failed to init interface for address fe80::49b:c7ff:fe95:24eb%2 Jun 20 19:55:23.433797 ntpd[1962]: unable to create socket on eth0 (6) for fe80::49b:c7ff:fe95:24eb%2#123 Jun 20 19:55:23.433813 ntpd[1962]: failed to init interface for address fe80::49b:c7ff:fe95:24eb%2 Jun 20 19:55:23.456561 polkitd[2155]: Started polkitd version 
126 Jun 20 19:55:23.468476 polkitd[2155]: Loading rules from directory /etc/polkit-1/rules.d Jun 20 19:55:23.472441 polkitd[2155]: Loading rules from directory /run/polkit-1/rules.d Jun 20 19:55:23.474114 polkitd[2155]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 20 19:55:23.475066 polkitd[2155]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jun 20 19:55:23.476516 polkitd[2155]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 20 19:55:23.476676 polkitd[2155]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 20 19:55:23.479371 polkitd[2155]: Finished loading, compiling and executing 2 rules Jun 20 19:55:23.480152 systemd[1]: Started polkit.service - Authorization Manager. Jun 20 19:55:23.483908 dbus-daemon[1956]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 20 19:55:23.485446 polkitd[2155]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 20 19:55:23.509266 sshd_keygen[2002]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:55:23.519714 systemd-hostnamed[2028]: Hostname set to (transient) Jun 20 19:55:23.519861 systemd-resolved[1835]: System hostname changed to 'ip-172-31-30-175'. Jun 20 19:55:23.567481 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:55:23.571591 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:55:23.597198 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:55:23.597866 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:55:23.602975 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:55:23.628664 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:55:23.633968 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:55:23.643985 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:55:23.644922 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:55:23.778705 containerd[1986]: time="2025-06-20T19:55:23.778660926Z" level=info msg="Start subscribing containerd event" Jun 20 19:55:23.778941 containerd[1986]: time="2025-06-20T19:55:23.778895223Z" level=info msg="Start recovering state" Jun 20 19:55:23.779142 containerd[1986]: time="2025-06-20T19:55:23.779127305Z" level=info msg="Start event monitor" Jun 20 19:55:23.779219 containerd[1986]: time="2025-06-20T19:55:23.779208065Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:55:23.779309 containerd[1986]: time="2025-06-20T19:55:23.779297095Z" level=info msg="Start streaming server" Jun 20 19:55:23.779376 containerd[1986]: time="2025-06-20T19:55:23.779366045Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:55:23.779433 containerd[1986]: time="2025-06-20T19:55:23.779422526Z" level=info msg="runtime interface starting up..." Jun 20 19:55:23.779484 containerd[1986]: time="2025-06-20T19:55:23.779474982Z" level=info msg="starting plugins..." Jun 20 19:55:23.779545 containerd[1986]: time="2025-06-20T19:55:23.779534692Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:55:23.780298 containerd[1986]: time="2025-06-20T19:55:23.780260564Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jun 20 19:55:23.780507 containerd[1986]: time="2025-06-20T19:55:23.780447527Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:55:23.780798 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:55:23.784796 containerd[1986]: time="2025-06-20T19:55:23.784728866Z" level=info msg="containerd successfully booted in 0.466677s" Jun 20 19:55:23.859051 tar[1984]: linux-amd64/README.md Jun 20 19:55:23.877983 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:55:23.959551 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:55:23.961810 systemd[1]: Started sshd@0-172.31.30.175:22-147.75.109.163:46228.service - OpenSSH per-connection server daemon (147.75.109.163:46228). Jun 20 19:55:24.120931 systemd-networkd[1834]: eth0: Gained IPv6LL Jun 20 19:55:24.123861 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:55:24.125345 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:55:24.128537 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 20 19:55:24.133879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:55:24.140880 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:55:24.183800 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:55:24.218539 sshd[2200]: Accepted publickey for core from 147.75.109.163 port 46228 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:55:24.224793 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:24.236120 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:55:24.240602 amazon-ssm-agent[2203]: Initializing new seelog logger Jun 20 19:55:24.241025 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:55:24.245257 amazon-ssm-agent[2203]: New Seelog Logger Creation Complete Jun 20 19:55:24.245496 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 19:55:24.245554 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 19:55:24.246892 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 processing appconfig overrides Jun 20 19:55:24.247442 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 19:55:24.247537 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 19:55:24.247684 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 processing appconfig overrides Jun 20 19:55:24.247728 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.2473 INFO Proxy environment variables: Jun 20 19:55:24.249384 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 19:55:24.249458 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 19:55:24.249595 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 processing appconfig overrides Jun 20 19:55:24.261106 systemd-logind[1969]: New session 1 of user core. Jun 20 19:55:24.262865 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 19:55:24.262865 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
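The earlier containerd error ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a node that has not joined a cluster yet; the cluster's CNI add-on normally writes its configuration there later. For orientation only, a hand-written bridge conflist of the shape containerd's CRI plugin accepts looks roughly like this (network name, subnet and plugin choice are illustrative, not taken from this log):

# Sketch only: a minimal CNI conflist; real clusters get this from their CNI add-on.
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-bridge.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF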
Jun 20 19:55:24.263116 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 processing appconfig overrides Jun 20 19:55:24.279269 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:55:24.286896 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:55:24.305905 (systemd)[2223]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:55:24.311955 systemd-logind[1969]: New session c1 of user core. Jun 20 19:55:24.348834 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.2474 INFO https_proxy: Jun 20 19:55:24.446876 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.2474 INFO http_proxy: Jun 20 19:55:24.547758 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.2474 INFO no_proxy: Jun 20 19:55:24.600949 systemd[2223]: Queued start job for default target default.target. Jun 20 19:55:24.607219 systemd[2223]: Created slice app.slice - User Application Slice. Jun 20 19:55:24.608381 systemd[2223]: Reached target paths.target - Paths. Jun 20 19:55:24.608599 systemd[2223]: Reached target timers.target - Timers. Jun 20 19:55:24.611324 systemd[2223]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:55:24.634670 systemd[2223]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:55:24.635284 systemd[2223]: Reached target sockets.target - Sockets. Jun 20 19:55:24.635513 systemd[2223]: Reached target basic.target - Basic System. Jun 20 19:55:24.635722 systemd[2223]: Reached target default.target - Main User Target. Jun 20 19:55:24.635724 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:55:24.635790 systemd[2223]: Startup finished in 307ms. Jun 20 19:55:24.644642 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:55:24.645690 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.2477 INFO Checking if agent identity type OnPrem can be assumed Jun 20 19:55:24.673773 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 19:55:24.673773 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 19:55:24.673773 amazon-ssm-agent[2203]: 2025/06/20 19:55:24 processing appconfig overrides Jun 20 19:55:24.699228 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.2492 INFO Checking if agent identity type EC2 can be assumed Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.3299 INFO Agent will take identity from EC2 Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.3315 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.3315 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.3316 INFO [amazon-ssm-agent] Starting Core Agent Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.3316 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.3316 INFO [Registrar] Starting registrar module Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.3330 INFO [EC2Identity] Checking disk for registration info Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.3330 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.3330 INFO [EC2Identity] Generating registration keypair Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.6281 INFO [EC2Identity] Checking write access before registering Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.6286 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.6730 INFO [EC2Identity] EC2 registration was successful. Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.6730 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.6731 INFO [CredentialRefresher] credentialRefresher has started Jun 20 19:55:24.699361 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.6731 INFO [CredentialRefresher] Starting credentials refresher loop Jun 20 19:55:24.699753 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.6988 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 20 19:55:24.699753 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.6990 INFO [CredentialRefresher] Credentials ready Jun 20 19:55:24.744766 amazon-ssm-agent[2203]: 2025-06-20 19:55:24.6994 INFO [CredentialRefresher] Next credential rotation will be in 29.99998943995 minutes Jun 20 19:55:24.790714 systemd[1]: Started sshd@1-172.31.30.175:22-147.75.109.163:41034.service - OpenSSH per-connection server daemon (147.75.109.163:41034). Jun 20 19:55:24.965812 sshd[2235]: Accepted publickey for core from 147.75.109.163 port 41034 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:55:24.967485 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:24.973962 systemd-logind[1969]: New session 2 of user core. Jun 20 19:55:24.978981 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:55:25.099385 sshd[2237]: Connection closed by 147.75.109.163 port 41034 Jun 20 19:55:25.099939 sshd-session[2235]: pam_unix(sshd:session): session closed for user core Jun 20 19:55:25.103409 systemd[1]: sshd@1-172.31.30.175:22-147.75.109.163:41034.service: Deactivated successfully. Jun 20 19:55:25.105155 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:55:25.107535 systemd-logind[1969]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:55:25.109500 systemd-logind[1969]: Removed session 2. Jun 20 19:55:25.135213 systemd[1]: Started sshd@2-172.31.30.175:22-147.75.109.163:41046.service - OpenSSH per-connection server daemon (147.75.109.163:41046). Jun 20 19:55:25.306673 sshd[2243]: Accepted publickey for core from 147.75.109.163 port 41046 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:55:25.308151 sshd-session[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:25.313306 systemd-logind[1969]: New session 3 of user core. Jun 20 19:55:25.320165 systemd[1]: Started session-3.scope - Session 3 of User core. 
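The amazon-ssm-agent block above shows a first-boot registration with Systems Manager (keypair generated, EC2 registration successful) followed by the credentials refresher scheduling its next rotation in roughly 30 minutes. A quick health check on the node, as a sketch:

# Is the agent unit running, and what has it logged since boot?
systemctl status amazon-ssm-agent --no-pager
journalctl -u amazon-ssm-agent -b --no-pager | tail -n 20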
Jun 20 19:55:25.440561 sshd[2245]: Connection closed by 147.75.109.163 port 41046 Jun 20 19:55:25.441185 sshd-session[2243]: pam_unix(sshd:session): session closed for user core Jun 20 19:55:25.444808 systemd[1]: sshd@2-172.31.30.175:22-147.75.109.163:41046.service: Deactivated successfully. Jun 20 19:55:25.446620 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 19:55:25.450043 systemd-logind[1969]: Session 3 logged out. Waiting for processes to exit. Jun 20 19:55:25.451589 systemd-logind[1969]: Removed session 3. Jun 20 19:55:25.713252 amazon-ssm-agent[2203]: 2025-06-20 19:55:25.7130 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 20 19:55:25.814975 amazon-ssm-agent[2203]: 2025-06-20 19:55:25.7152 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2252) started Jun 20 19:55:25.915373 amazon-ssm-agent[2203]: 2025-06-20 19:55:25.7152 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 20 19:55:26.381927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:55:26.382820 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:55:26.385695 systemd[1]: Startup finished in 2.745s (kernel) + 8.194s (initrd) + 8.603s (userspace) = 19.543s. Jun 20 19:55:26.387983 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:55:26.433228 ntpd[1962]: Listen normally on 7 eth0 [fe80::49b:c7ff:fe95:24eb%2]:123 Jun 20 19:55:26.433580 ntpd[1962]: 20 Jun 19:55:26 ntpd[1962]: Listen normally on 7 eth0 [fe80::49b:c7ff:fe95:24eb%2]:123 Jun 20 19:55:27.578865 kubelet[2269]: E0620 19:55:27.578808 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:55:27.582141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:55:27.582348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:55:27.583382 systemd[1]: kubelet.service: Consumed 1.085s CPU time, 268.6M memory peak. Jun 20 19:55:30.503942 systemd-resolved[1835]: Clock change detected. Flushing caches. Jun 20 19:55:36.551120 systemd[1]: Started sshd@3-172.31.30.175:22-147.75.109.163:40208.service - OpenSSH per-connection server daemon (147.75.109.163:40208). Jun 20 19:55:36.724664 sshd[2280]: Accepted publickey for core from 147.75.109.163 port 40208 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:55:36.726708 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:36.733147 systemd-logind[1969]: New session 4 of user core. Jun 20 19:55:36.739485 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:55:36.860696 sshd[2282]: Connection closed by 147.75.109.163 port 40208 Jun 20 19:55:36.861392 sshd-session[2280]: pam_unix(sshd:session): session closed for user core Jun 20 19:55:36.868721 systemd[1]: sshd@3-172.31.30.175:22-147.75.109.163:40208.service: Deactivated successfully. Jun 20 19:55:36.872135 systemd[1]: session-4.scope: Deactivated successfully. 
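The kubelet failure above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, so until the node joins a cluster the unit exits and systemd keeps rescheduling it (hence the rising restart counter later in this log). Purely as an illustration of what that file contains, a minimal stand-alone KubeletConfiguration might look like the sketch below; the values are placeholders, not Flatcar or kubeadm defaults, and writing the file by hand does not by itself join the node to a cluster. Note that cgroupDriver: systemd lines up with the SystemdCgroup=true setting in the containerd CRI config dumped earlier.

# Sketch: the file kubeadm would normally generate for the kubelet.
mkdir -p /var/lib/kubelet
cat >/var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
EOF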
Jun 20 19:55:36.873371 systemd-logind[1969]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:55:36.875381 systemd-logind[1969]: Removed session 4. Jun 20 19:55:36.897102 systemd[1]: Started sshd@4-172.31.30.175:22-147.75.109.163:40214.service - OpenSSH per-connection server daemon (147.75.109.163:40214). Jun 20 19:55:37.079356 sshd[2288]: Accepted publickey for core from 147.75.109.163 port 40214 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:55:37.080733 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:37.087339 systemd-logind[1969]: New session 5 of user core. Jun 20 19:55:37.092505 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:55:37.215618 sshd[2290]: Connection closed by 147.75.109.163 port 40214 Jun 20 19:55:37.216407 sshd-session[2288]: pam_unix(sshd:session): session closed for user core Jun 20 19:55:37.221000 systemd[1]: sshd@4-172.31.30.175:22-147.75.109.163:40214.service: Deactivated successfully. Jun 20 19:55:37.223431 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:55:37.224409 systemd-logind[1969]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:55:37.226157 systemd-logind[1969]: Removed session 5. Jun 20 19:55:37.246749 systemd[1]: Started sshd@5-172.31.30.175:22-147.75.109.163:40226.service - OpenSSH per-connection server daemon (147.75.109.163:40226). Jun 20 19:55:37.423836 sshd[2296]: Accepted publickey for core from 147.75.109.163 port 40226 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:55:37.425209 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:37.430786 systemd-logind[1969]: New session 6 of user core. Jun 20 19:55:37.441477 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:55:37.560080 sshd[2298]: Connection closed by 147.75.109.163 port 40226 Jun 20 19:55:37.560914 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Jun 20 19:55:37.566390 systemd[1]: sshd@5-172.31.30.175:22-147.75.109.163:40226.service: Deactivated successfully. Jun 20 19:55:37.568682 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:55:37.569745 systemd-logind[1969]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:55:37.571707 systemd-logind[1969]: Removed session 6. Jun 20 19:55:37.600049 systemd[1]: Started sshd@6-172.31.30.175:22-147.75.109.163:40242.service - OpenSSH per-connection server daemon (147.75.109.163:40242). Jun 20 19:55:37.776484 sshd[2304]: Accepted publickey for core from 147.75.109.163 port 40242 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:55:37.777839 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:37.784544 systemd-logind[1969]: New session 7 of user core. Jun 20 19:55:37.793470 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 20 19:55:37.943713 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:55:37.944213 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:55:37.957360 sudo[2307]: pam_unix(sudo:session): session closed for user root Jun 20 19:55:37.980462 sshd[2306]: Connection closed by 147.75.109.163 port 40242 Jun 20 19:55:37.981251 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Jun 20 19:55:37.985626 systemd[1]: sshd@6-172.31.30.175:22-147.75.109.163:40242.service: Deactivated successfully. Jun 20 19:55:37.987821 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:55:37.990592 systemd-logind[1969]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:55:37.992035 systemd-logind[1969]: Removed session 7. Jun 20 19:55:38.015962 systemd[1]: Started sshd@7-172.31.30.175:22-147.75.109.163:40256.service - OpenSSH per-connection server daemon (147.75.109.163:40256). Jun 20 19:55:38.185013 sshd[2313]: Accepted publickey for core from 147.75.109.163 port 40256 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:55:38.186549 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:38.193160 systemd-logind[1969]: New session 8 of user core. Jun 20 19:55:38.200458 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:55:38.296483 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:55:38.296845 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:55:38.302111 sudo[2317]: pam_unix(sudo:session): session closed for user root Jun 20 19:55:38.307841 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:55:38.308201 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:55:38.318676 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:55:38.356024 augenrules[2339]: No rules Jun 20 19:55:38.357507 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:55:38.357891 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:55:38.359008 sudo[2316]: pam_unix(sudo:session): session closed for user root Jun 20 19:55:38.381628 sshd[2315]: Connection closed by 147.75.109.163 port 40256 Jun 20 19:55:38.382367 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Jun 20 19:55:38.386885 systemd[1]: sshd@7-172.31.30.175:22-147.75.109.163:40256.service: Deactivated successfully. Jun 20 19:55:38.388775 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:55:38.389764 systemd-logind[1969]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:55:38.391589 systemd-logind[1969]: Removed session 8. Jun 20 19:55:38.418959 systemd[1]: Started sshd@8-172.31.30.175:22-147.75.109.163:40272.service - OpenSSH per-connection server daemon (147.75.109.163:40272). Jun 20 19:55:38.599081 sshd[2348]: Accepted publickey for core from 147.75.109.163 port 40272 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:55:38.600663 sshd-session[2348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:38.607452 systemd-logind[1969]: New session 9 of user core. Jun 20 19:55:38.612474 systemd[1]: Started session-9.scope - Session 9 of User core. 
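The sudo commands above delete the default audit rule fragments and restart audit-rules.service, after which augenrules reports "No rules". The same reload can be driven directly; a sketch:

# Rebuild /etc/audit/audit.rules from the fragments in /etc/audit/rules.d and load them.
augenrules --load
# Show what the kernel currently has loaded ("No rules" when the set is empty).
auditctl -l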
Jun 20 19:55:38.711666 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:55:38.711954 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:55:38.713066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:55:38.714803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:55:39.051557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:55:39.066031 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:55:39.147956 kubelet[2374]: E0620 19:55:39.147899 2374 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:55:39.151797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:55:39.151941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:55:39.152765 systemd[1]: kubelet.service: Consumed 197ms CPU time, 107.8M memory peak. Jun 20 19:55:39.453386 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:55:39.467730 (dockerd)[2385]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:55:39.913447 dockerd[2385]: time="2025-06-20T19:55:39.913386814Z" level=info msg="Starting up" Jun 20 19:55:39.915089 dockerd[2385]: time="2025-06-20T19:55:39.915047704Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:55:39.977580 dockerd[2385]: time="2025-06-20T19:55:39.977526177Z" level=info msg="Loading containers: start." Jun 20 19:55:40.006267 kernel: Initializing XFRM netlink socket Jun 20 19:55:40.265453 (udev-worker)[2406]: Network interface NamePolicy= disabled on kernel command line. Jun 20 19:55:40.307644 systemd-networkd[1834]: docker0: Link UP Jun 20 19:55:40.312390 dockerd[2385]: time="2025-06-20T19:55:40.312347769Z" level=info msg="Loading containers: done." Jun 20 19:55:40.327746 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2535819818-merged.mount: Deactivated successfully. 
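dockerd starts with no containers to restore and brings up the docker0 bridge (systemd-networkd reports Link UP for it); buildkit initialization and the "API listen on /run/docker.sock" line follow just below. Once the daemon is up, the storage driver and bridge addressing can be confirmed; a sketch:

# Confirm the storage driver, cgroup driver and daemon version.
docker info --format 'driver={{.Driver}} cgroup={{.CgroupDriver}} version={{.ServerVersion}}'
# Show the address pool backing the default docker0 bridge.
docker network inspect bridge --format '{{json .IPAM.Config}}'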
Jun 20 19:55:40.331025 dockerd[2385]: time="2025-06-20T19:55:40.330963133Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:55:40.331147 dockerd[2385]: time="2025-06-20T19:55:40.331050989Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:55:40.331180 dockerd[2385]: time="2025-06-20T19:55:40.331159723Z" level=info msg="Initializing buildkit" Jun 20 19:55:40.358777 dockerd[2385]: time="2025-06-20T19:55:40.358735648Z" level=info msg="Completed buildkit initialization" Jun 20 19:55:40.367735 dockerd[2385]: time="2025-06-20T19:55:40.367679885Z" level=info msg="Daemon has completed initialization" Jun 20 19:55:40.368559 dockerd[2385]: time="2025-06-20T19:55:40.367880875Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:55:40.367991 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:55:41.464538 containerd[1986]: time="2025-06-20T19:55:41.464499179Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 20 19:55:42.049943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345575867.mount: Deactivated successfully. Jun 20 19:55:43.813029 containerd[1986]: time="2025-06-20T19:55:43.812317746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:43.814263 containerd[1986]: time="2025-06-20T19:55:43.814157574Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jun 20 19:55:43.815830 containerd[1986]: time="2025-06-20T19:55:43.815767391Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:43.819134 containerd[1986]: time="2025-06-20T19:55:43.818654396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:43.819855 containerd[1986]: time="2025-06-20T19:55:43.819814239Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.355275562s" Jun 20 19:55:43.819970 containerd[1986]: time="2025-06-20T19:55:43.819864023Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jun 20 19:55:43.820551 containerd[1986]: time="2025-06-20T19:55:43.820502481Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 20 19:55:46.043573 containerd[1986]: time="2025-06-20T19:55:46.043515570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:46.045289 containerd[1986]: time="2025-06-20T19:55:46.045236091Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jun 20 19:55:46.048243 containerd[1986]: time="2025-06-20T19:55:46.046883202Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:46.050693 containerd[1986]: time="2025-06-20T19:55:46.050654339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:46.051539 containerd[1986]: time="2025-06-20T19:55:46.051509162Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 2.230969636s" Jun 20 19:55:46.051631 containerd[1986]: time="2025-06-20T19:55:46.051619044Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jun 20 19:55:46.052471 containerd[1986]: time="2025-06-20T19:55:46.052424577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 20 19:55:47.871589 containerd[1986]: time="2025-06-20T19:55:47.871521169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:47.873627 containerd[1986]: time="2025-06-20T19:55:47.873592148Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jun 20 19:55:47.876152 containerd[1986]: time="2025-06-20T19:55:47.876086893Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:47.879846 containerd[1986]: time="2025-06-20T19:55:47.879776332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:47.880668 containerd[1986]: time="2025-06-20T19:55:47.880517993Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.828062798s" Jun 20 19:55:47.880668 containerd[1986]: time="2025-06-20T19:55:47.880551122Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jun 20 19:55:47.881267 containerd[1986]: time="2025-06-20T19:55:47.881233441Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 20 19:55:49.136967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198287543.mount: Deactivated successfully. Jun 20 19:55:49.354601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
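[Editor's note] containerd logs each pull with a "bytes read" figure and an "in <duration>" suffix, so a rough per-image throughput falls straight out of those two numbers. The values in this sketch are copied from the pull lines above; note that "bytes read" is what was fetched during the pull, which is not the same as the unpacked image size containerd also reports.

    # Rough throughput estimate from the figures containerd logs per pull.
    pulls = {
        "kube-apiserver:v1.33.2":          (30079099, 2.355275562),
        "kube-controller-manager:v1.33.2": (26018946, 2.230969636),
        "kube-scheduler:v1.33.2":          (20155055, 1.828062798),
    }

    for image, (bytes_read, seconds) in pulls.items():
        mib_per_s = bytes_read / seconds / (1024 * 1024)
        print(f"{image}: {bytes_read} bytes in {seconds:.3f}s ≈ {mib_per_s:.1f} MiB/s")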
Jun 20 19:55:49.357839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:55:49.759414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:55:49.772023 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:55:49.863319 kubelet[2665]: E0620 19:55:49.863231 2665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:55:49.870673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:55:49.871900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:55:49.872630 systemd[1]: kubelet.service: Consumed 213ms CPU time, 111.1M memory peak. Jun 20 19:55:49.897341 containerd[1986]: time="2025-06-20T19:55:49.897286477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:49.898292 containerd[1986]: time="2025-06-20T19:55:49.898130564Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jun 20 19:55:49.899321 containerd[1986]: time="2025-06-20T19:55:49.899292765Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:49.901252 containerd[1986]: time="2025-06-20T19:55:49.900966286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:49.901434 containerd[1986]: time="2025-06-20T19:55:49.901402375Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.020135586s" Jun 20 19:55:49.901479 containerd[1986]: time="2025-06-20T19:55:49.901440584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jun 20 19:55:49.901922 containerd[1986]: time="2025-06-20T19:55:49.901893531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 20 19:55:50.452917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034831391.mount: Deactivated successfully. 
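[Editor's note] The duration containerd reports for a pull can be cross-checked against the gap between the PullImage request and the Pulled entry. A small sketch for the kube-proxy pull follows; the timestamps are copied from the containerd time= fields above, and the subtraction ignores the date, which is fine here because both entries fall on the same day.

    from datetime import datetime

    FMT = "%H:%M:%S.%f"
    started  = datetime.strptime("19:55:47.881233", FMT)  # PullImage kube-proxy logged
    finished = datetime.strptime("19:55:49.901402", FMT)  # Pulled kube-proxy logged

    delta = (finished - started).total_seconds()
    print(f"wall-clock gap ≈ {delta:.3f}s vs reported 2.020s")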
Jun 20 19:55:51.737148 containerd[1986]: time="2025-06-20T19:55:51.737084081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:51.739278 containerd[1986]: time="2025-06-20T19:55:51.739011676Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jun 20 19:55:51.741568 containerd[1986]: time="2025-06-20T19:55:51.741502959Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:51.744983 containerd[1986]: time="2025-06-20T19:55:51.744916271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:51.746941 containerd[1986]: time="2025-06-20T19:55:51.745907455Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.843984569s" Jun 20 19:55:51.746941 containerd[1986]: time="2025-06-20T19:55:51.745948323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jun 20 19:55:51.746941 containerd[1986]: time="2025-06-20T19:55:51.746880095Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:55:52.221925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177189432.mount: Deactivated successfully. 
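[Editor's note] The var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount units that keep being deactivated are ordinary systemd mount units whose names encode a path: "/" becomes "-", and a literal "-" is escaped as "\x2d". The helper below only reverses those two rules, which is enough to read these particular names; it is a sketch, not a full reimplementation of systemd-escape.

    import re

    def mount_unit_to_path(unit: str) -> str:
        """Decode names like 'var-lib-containerd-tmpmounts-containerd\\x2dmount123.mount'."""
        name = unit.removesuffix(".mount")
        # Unescaped "-" separates path components; "\xNN" sequences are literal characters.
        parts = [re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), p)
                 for p in name.split("-")]
        return "/" + "/".join(parts)

    print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount3345575867.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount3345575867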
Jun 20 19:55:52.235066 containerd[1986]: time="2025-06-20T19:55:52.235012489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:55:52.237168 containerd[1986]: time="2025-06-20T19:55:52.236908386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 20 19:55:52.239561 containerd[1986]: time="2025-06-20T19:55:52.239518094Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:55:52.242915 containerd[1986]: time="2025-06-20T19:55:52.242841237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:55:52.244119 containerd[1986]: time="2025-06-20T19:55:52.243444666Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 496.527946ms" Jun 20 19:55:52.244119 containerd[1986]: time="2025-06-20T19:55:52.243477144Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:55:52.244449 containerd[1986]: time="2025-06-20T19:55:52.244410388Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 20 19:55:52.797552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376087393.mount: Deactivated successfully. Jun 20 19:55:54.624423 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
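[Editor's note] Summing the "bytes read" figures for the pulls completed by this point gives a rough idea of how much has been downloaded for the control-plane images so far; the etcd pull is still in flight here and is not included. Figures are copied from the log lines above.

    bytes_read = {
        "kube-apiserver":          30079099,
        "kube-controller-manager": 26018946,
        "kube-scheduler":          20155055,
        "kube-proxy":              31892746,
        "coredns":                 20942238,
        "pause":                   321138,
    }

    total = sum(bytes_read.values())
    print(f"{total} bytes ≈ {total / 2**20:.1f} MiB downloaded so far")
    # -> 129409222 bytes ≈ 123.4 MiB downloaded so far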
Jun 20 19:55:55.193582 containerd[1986]: time="2025-06-20T19:55:55.193346033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:55.195308 containerd[1986]: time="2025-06-20T19:55:55.195259441Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jun 20 19:55:55.197805 containerd[1986]: time="2025-06-20T19:55:55.197740412Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:55.201970 containerd[1986]: time="2025-06-20T19:55:55.201925969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:55:55.202953 containerd[1986]: time="2025-06-20T19:55:55.202921487Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.958480647s" Jun 20 19:55:55.203075 containerd[1986]: time="2025-06-20T19:55:55.203062262Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jun 20 19:55:58.046577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:55:58.047394 systemd[1]: kubelet.service: Consumed 213ms CPU time, 111.1M memory peak. Jun 20 19:55:58.050099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:55:58.089369 systemd[1]: Reload requested from client PID 2814 ('systemctl') (unit session-9.scope)... Jun 20 19:55:58.089388 systemd[1]: Reloading... Jun 20 19:55:58.228300 zram_generator::config[2858]: No configuration found. Jun 20 19:55:58.379029 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:55:58.517483 systemd[1]: Reloading finished in 427 ms. Jun 20 19:55:58.573886 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:55:58.573983 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:55:58.574446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:55:58.574503 systemd[1]: kubelet.service: Consumed 138ms CPU time, 98.2M memory peak. Jun 20 19:55:58.576324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:55:58.818978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:55:58.828665 (kubelet)[2921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:55:58.886642 kubelet[2921]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:55:58.887004 kubelet[2921]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jun 20 19:55:58.887049 kubelet[2921]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:55:58.889723 kubelet[2921]: I0620 19:55:58.889656 2921 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:55:59.575802 kubelet[2921]: I0620 19:55:59.575751 2921 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:55:59.575802 kubelet[2921]: I0620 19:55:59.575785 2921 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:55:59.576067 kubelet[2921]: I0620 19:55:59.576049 2921 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:55:59.618967 kubelet[2921]: I0620 19:55:59.618912 2921 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:55:59.625702 kubelet[2921]: E0620 19:55:59.625650 2921 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.175:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 19:55:59.649545 kubelet[2921]: I0620 19:55:59.649515 2921 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:55:59.658617 kubelet[2921]: I0620 19:55:59.658566 2921 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:55:59.664791 kubelet[2921]: I0620 19:55:59.664578 2921 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:55:59.668587 kubelet[2921]: I0620 19:55:59.664637 2921 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-175","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:55:59.669698 kubelet[2921]: I0620 19:55:59.669662 2921 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:55:59.669698 kubelet[2921]: I0620 19:55:59.669692 2921 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:55:59.671000 kubelet[2921]: I0620 19:55:59.670958 2921 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:55:59.674100 kubelet[2921]: I0620 19:55:59.674069 2921 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:55:59.674100 kubelet[2921]: I0620 19:55:59.674098 2921 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:55:59.675904 kubelet[2921]: I0620 19:55:59.675826 2921 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:55:59.685928 kubelet[2921]: I0620 19:55:59.685892 2921 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:55:59.690253 kubelet[2921]: E0620 19:55:59.690022 2921 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.175:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-175&limit=500&resourceVersion=0\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 19:55:59.693159 kubelet[2921]: I0620 19:55:59.693127 2921 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:55:59.694256 kubelet[2921]: I0620 19:55:59.693943 2921 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Jun 20 19:55:59.694884 kubelet[2921]: E0620 19:55:59.694849 2921 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 19:55:59.695373 kubelet[2921]: W0620 19:55:59.695346 2921 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:55:59.700642 kubelet[2921]: I0620 19:55:59.700601 2921 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:55:59.700742 kubelet[2921]: I0620 19:55:59.700676 2921 server.go:1289] "Started kubelet" Jun 20 19:55:59.700973 kubelet[2921]: I0620 19:55:59.700913 2921 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:55:59.702572 kubelet[2921]: I0620 19:55:59.702214 2921 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:55:59.702572 kubelet[2921]: I0620 19:55:59.702234 2921 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:55:59.708316 kubelet[2921]: I0620 19:55:59.708279 2921 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:55:59.713410 kubelet[2921]: E0620 19:55:59.709792 2921 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.175:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.175:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-175.184ad86b1a442cd2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-175,UID:ip-172-31-30-175,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-175,},FirstTimestamp:2025-06-20 19:55:59.700634834 +0000 UTC m=+0.867531426,LastTimestamp:2025-06-20 19:55:59.700634834 +0000 UTC m=+0.867531426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-175,}" Jun 20 19:55:59.721498 kubelet[2921]: I0620 19:55:59.720939 2921 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:55:59.724359 kubelet[2921]: I0620 19:55:59.724336 2921 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:55:59.727916 kubelet[2921]: E0620 19:55:59.727891 2921 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-175\" not found" Jun 20 19:55:59.728943 kubelet[2921]: I0620 19:55:59.728925 2921 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:55:59.729344 kubelet[2921]: I0620 19:55:59.729326 2921 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:55:59.731286 kubelet[2921]: I0620 19:55:59.730075 2921 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:55:59.731286 kubelet[2921]: E0620 19:55:59.730979 2921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-175?timeout=10s\": dial tcp 172.31.30.175:6443: connect: 
connection refused" interval="200ms" Jun 20 19:55:59.731286 kubelet[2921]: E0620 19:55:59.731078 2921 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 19:55:59.732821 kubelet[2921]: I0620 19:55:59.732400 2921 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:55:59.732821 kubelet[2921]: I0620 19:55:59.732485 2921 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:55:59.734747 kubelet[2921]: E0620 19:55:59.734724 2921 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:55:59.735051 kubelet[2921]: I0620 19:55:59.735027 2921 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:55:59.748256 kubelet[2921]: I0620 19:55:59.748189 2921 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 19:55:59.763896 kubelet[2921]: I0620 19:55:59.763419 2921 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:55:59.763896 kubelet[2921]: I0620 19:55:59.763441 2921 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:55:59.763896 kubelet[2921]: I0620 19:55:59.763462 2921 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:55:59.769049 kubelet[2921]: I0620 19:55:59.769002 2921 policy_none.go:49] "None policy: Start" Jun 20 19:55:59.769049 kubelet[2921]: I0620 19:55:59.769033 2921 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:55:59.769049 kubelet[2921]: I0620 19:55:59.769045 2921 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:55:59.773252 kubelet[2921]: I0620 19:55:59.773143 2921 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 19:55:59.773252 kubelet[2921]: I0620 19:55:59.773170 2921 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 19:55:59.773252 kubelet[2921]: I0620 19:55:59.773192 2921 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:55:59.773252 kubelet[2921]: I0620 19:55:59.773200 2921 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 19:55:59.773429 kubelet[2921]: E0620 19:55:59.773262 2921 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:55:59.779390 kubelet[2921]: E0620 19:55:59.779353 2921 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 19:55:59.784455 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:55:59.800294 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
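[Editor's note] The repeated "dial tcp 172.31.30.175:6443: connect: connection refused" errors simply mean nothing is listening on the API server port yet; the static kube-apiserver pod has not been started at this point. The probe below reproduces that check as a sketch; the address and port are taken from the log lines above.

    import socket

    def api_server_reachable(host: str = "172.31.30.175", port: int = 6443,
                             timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:  # ConnectionRefusedError, timeouts, etc.
            print(f"{host}:{port} not reachable: {exc}")
            return False

    if __name__ == "__main__":
        print("reachable" if api_server_reachable() else "kubelet will keep retrying")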
Jun 20 19:55:59.804762 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:55:59.822569 kubelet[2921]: E0620 19:55:59.822545 2921 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 19:55:59.822908 kubelet[2921]: I0620 19:55:59.822895 2921 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:55:59.823006 kubelet[2921]: I0620 19:55:59.822978 2921 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:55:59.823346 kubelet[2921]: I0620 19:55:59.823331 2921 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:55:59.824526 kubelet[2921]: E0620 19:55:59.824501 2921 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:55:59.824596 kubelet[2921]: E0620 19:55:59.824540 2921 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-175\" not found" Jun 20 19:55:59.897804 systemd[1]: Created slice kubepods-burstable-pod948b86b07f401975bc3dc0b664df2d82.slice - libcontainer container kubepods-burstable-pod948b86b07f401975bc3dc0b664df2d82.slice. Jun 20 19:55:59.907216 kubelet[2921]: E0620 19:55:59.906724 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:55:59.913763 systemd[1]: Created slice kubepods-burstable-pod7cdfce65acbfffe1166f510e71bdfe7d.slice - libcontainer container kubepods-burstable-pod7cdfce65acbfffe1166f510e71bdfe7d.slice. Jun 20 19:55:59.924862 kubelet[2921]: I0620 19:55:59.924833 2921 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-175" Jun 20 19:55:59.925170 kubelet[2921]: E0620 19:55:59.925129 2921 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.175:6443/api/v1/nodes\": dial tcp 172.31.30.175:6443: connect: connection refused" node="ip-172-31-30-175" Jun 20 19:55:59.927144 kubelet[2921]: E0620 19:55:59.927117 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:55:59.930756 systemd[1]: Created slice kubepods-burstable-podd017630e7cf8f0e0f074525a21c29775.slice - libcontainer container kubepods-burstable-podd017630e7cf8f0e0f074525a21c29775.slice. 
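[Editor's note] The kubepods-burstable-pod<UID>.slice units created here are the per-pod cgroups for the three static control-plane pods; the 32-character hex suffix is the pod UID. The UID-to-pod mapping in this sketch is taken from the volume-attach lines that follow in the log, so it is specific to this node.

    import re

    POD_BY_UID = {  # from the reconciler_common.go lines in this log
        "948b86b07f401975bc3dc0b664df2d82": "kube-apiserver-ip-172-31-30-175",
        "7cdfce65acbfffe1166f510e71bdfe7d": "kube-controller-manager-ip-172-31-30-175",
        "d017630e7cf8f0e0f074525a21c29775": "kube-scheduler-ip-172-31-30-175",
    }

    def pod_for_slice(slice_name: str) -> str:
        m = re.search(r"pod([0-9a-f]{32})\.slice$", slice_name)
        return POD_BY_UID.get(m.group(1), "unknown") if m else "not a pod slice"

    print(pod_for_slice("kubepods-burstable-pod948b86b07f401975bc3dc0b664df2d82.slice"))
    # -> kube-apiserver-ip-172-31-30-175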
Jun 20 19:55:59.932022 kubelet[2921]: E0620 19:55:59.931908 2921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-175?timeout=10s\": dial tcp 172.31.30.175:6443: connect: connection refused" interval="400ms" Jun 20 19:55:59.932765 kubelet[2921]: I0620 19:55:59.932558 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: \"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:55:59.932765 kubelet[2921]: I0620 19:55:59.932583 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d017630e7cf8f0e0f074525a21c29775-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-175\" (UID: \"d017630e7cf8f0e0f074525a21c29775\") " pod="kube-system/kube-scheduler-ip-172-31-30-175" Jun 20 19:55:59.932765 kubelet[2921]: I0620 19:55:59.932599 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/948b86b07f401975bc3dc0b664df2d82-ca-certs\") pod \"kube-apiserver-ip-172-31-30-175\" (UID: \"948b86b07f401975bc3dc0b664df2d82\") " pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:55:59.932765 kubelet[2921]: I0620 19:55:59.932619 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/948b86b07f401975bc3dc0b664df2d82-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-175\" (UID: \"948b86b07f401975bc3dc0b664df2d82\") " pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:55:59.932765 kubelet[2921]: I0620 19:55:59.932637 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/948b86b07f401975bc3dc0b664df2d82-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-175\" (UID: \"948b86b07f401975bc3dc0b664df2d82\") " pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:55:59.932950 kubelet[2921]: I0620 19:55:59.932658 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: \"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:55:59.932950 kubelet[2921]: I0620 19:55:59.932673 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: \"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:55:59.932950 kubelet[2921]: I0620 19:55:59.932689 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: 
\"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:55:59.932950 kubelet[2921]: I0620 19:55:59.932706 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: \"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:55:59.934187 kubelet[2921]: E0620 19:55:59.934156 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:00.128100 kubelet[2921]: I0620 19:56:00.128063 2921 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-175" Jun 20 19:56:00.128643 kubelet[2921]: E0620 19:56:00.128602 2921 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.175:6443/api/v1/nodes\": dial tcp 172.31.30.175:6443: connect: connection refused" node="ip-172-31-30-175" Jun 20 19:56:00.247993 containerd[1986]: time="2025-06-20T19:56:00.247860556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-175,Uid:d017630e7cf8f0e0f074525a21c29775,Namespace:kube-system,Attempt:0,}" Jun 20 19:56:00.252087 containerd[1986]: time="2025-06-20T19:56:00.252026676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-175,Uid:948b86b07f401975bc3dc0b664df2d82,Namespace:kube-system,Attempt:0,}" Jun 20 19:56:00.258934 containerd[1986]: time="2025-06-20T19:56:00.258797514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-175,Uid:7cdfce65acbfffe1166f510e71bdfe7d,Namespace:kube-system,Attempt:0,}" Jun 20 19:56:00.335068 kubelet[2921]: E0620 19:56:00.335023 2921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-175?timeout=10s\": dial tcp 172.31.30.175:6443: connect: connection refused" interval="800ms" Jun 20 19:56:00.450742 containerd[1986]: time="2025-06-20T19:56:00.450574410Z" level=info msg="connecting to shim 6f40ddabd115de838c53212d25c70177ca34fe24b696e2c8664642c61b7234a9" address="unix:///run/containerd/s/d853f91a41912ebcb969bbf6f2eeb22b8ff1a3eda36912096799647825a4abef" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:00.462086 containerd[1986]: time="2025-06-20T19:56:00.462028097Z" level=info msg="connecting to shim f3776c749273f8a2ff642b6e0732438a7ca5e11f4c60111d482f4ea0978c6fc4" address="unix:///run/containerd/s/2a15ad7ed7f6525a0bf6965477ed6ae2396711f4017c9ab22ae613cae8e26011" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:00.463082 containerd[1986]: time="2025-06-20T19:56:00.463005357Z" level=info msg="connecting to shim 53b1cf34a75f6e37809a27bc0936936525786fc3ea8ba9e869feb32675b6c435" address="unix:///run/containerd/s/23868064aebf656c95aecb49c112cc80887d158542cfda27b99bebe4c20d041f" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:00.535148 kubelet[2921]: I0620 19:56:00.533914 2921 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-175" Jun 20 19:56:00.535628 kubelet[2921]: E0620 19:56:00.535401 2921 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.175:6443/api/v1/nodes\": dial tcp 
172.31.30.175:6443: connect: connection refused" node="ip-172-31-30-175" Jun 20 19:56:00.578655 systemd[1]: Started cri-containerd-f3776c749273f8a2ff642b6e0732438a7ca5e11f4c60111d482f4ea0978c6fc4.scope - libcontainer container f3776c749273f8a2ff642b6e0732438a7ca5e11f4c60111d482f4ea0978c6fc4. Jun 20 19:56:00.597505 systemd[1]: Started cri-containerd-53b1cf34a75f6e37809a27bc0936936525786fc3ea8ba9e869feb32675b6c435.scope - libcontainer container 53b1cf34a75f6e37809a27bc0936936525786fc3ea8ba9e869feb32675b6c435. Jun 20 19:56:00.604881 systemd[1]: Started cri-containerd-6f40ddabd115de838c53212d25c70177ca34fe24b696e2c8664642c61b7234a9.scope - libcontainer container 6f40ddabd115de838c53212d25c70177ca34fe24b696e2c8664642c61b7234a9. Jun 20 19:56:00.697092 containerd[1986]: time="2025-06-20T19:56:00.697045489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-175,Uid:948b86b07f401975bc3dc0b664df2d82,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3776c749273f8a2ff642b6e0732438a7ca5e11f4c60111d482f4ea0978c6fc4\"" Jun 20 19:56:00.699434 containerd[1986]: time="2025-06-20T19:56:00.699390940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-175,Uid:7cdfce65acbfffe1166f510e71bdfe7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53b1cf34a75f6e37809a27bc0936936525786fc3ea8ba9e869feb32675b6c435\"" Jun 20 19:56:00.709667 containerd[1986]: time="2025-06-20T19:56:00.709620690Z" level=info msg="CreateContainer within sandbox \"f3776c749273f8a2ff642b6e0732438a7ca5e11f4c60111d482f4ea0978c6fc4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:56:00.712517 containerd[1986]: time="2025-06-20T19:56:00.712173203Z" level=info msg="CreateContainer within sandbox \"53b1cf34a75f6e37809a27bc0936936525786fc3ea8ba9e869feb32675b6c435\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:56:00.737036 containerd[1986]: time="2025-06-20T19:56:00.736990698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-175,Uid:d017630e7cf8f0e0f074525a21c29775,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f40ddabd115de838c53212d25c70177ca34fe24b696e2c8664642c61b7234a9\"" Jun 20 19:56:00.739426 containerd[1986]: time="2025-06-20T19:56:00.739385556Z" level=info msg="Container e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:00.740234 containerd[1986]: time="2025-06-20T19:56:00.739508738Z" level=info msg="Container f659a1edd15e42e40b3d14a3fda5a65c8fbcd65391afd8c9e5f24cdb49f4cd2e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:00.747629 containerd[1986]: time="2025-06-20T19:56:00.747592752Z" level=info msg="CreateContainer within sandbox \"6f40ddabd115de838c53212d25c70177ca34fe24b696e2c8664642c61b7234a9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:56:00.770106 containerd[1986]: time="2025-06-20T19:56:00.770057506Z" level=info msg="CreateContainer within sandbox \"53b1cf34a75f6e37809a27bc0936936525786fc3ea8ba9e869feb32675b6c435\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4\"" Jun 20 19:56:00.771786 containerd[1986]: time="2025-06-20T19:56:00.771325185Z" level=info msg="CreateContainer within sandbox \"f3776c749273f8a2ff642b6e0732438a7ca5e11f4c60111d482f4ea0978c6fc4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} 
returns container id \"f659a1edd15e42e40b3d14a3fda5a65c8fbcd65391afd8c9e5f24cdb49f4cd2e\"" Jun 20 19:56:00.771786 containerd[1986]: time="2025-06-20T19:56:00.771555822Z" level=info msg="StartContainer for \"e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4\"" Jun 20 19:56:00.774586 containerd[1986]: time="2025-06-20T19:56:00.774546268Z" level=info msg="connecting to shim e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4" address="unix:///run/containerd/s/23868064aebf656c95aecb49c112cc80887d158542cfda27b99bebe4c20d041f" protocol=ttrpc version=3 Jun 20 19:56:00.775143 containerd[1986]: time="2025-06-20T19:56:00.775105475Z" level=info msg="StartContainer for \"f659a1edd15e42e40b3d14a3fda5a65c8fbcd65391afd8c9e5f24cdb49f4cd2e\"" Jun 20 19:56:00.776000 containerd[1986]: time="2025-06-20T19:56:00.775964963Z" level=info msg="connecting to shim f659a1edd15e42e40b3d14a3fda5a65c8fbcd65391afd8c9e5f24cdb49f4cd2e" address="unix:///run/containerd/s/2a15ad7ed7f6525a0bf6965477ed6ae2396711f4017c9ab22ae613cae8e26011" protocol=ttrpc version=3 Jun 20 19:56:00.779292 containerd[1986]: time="2025-06-20T19:56:00.779252110Z" level=info msg="Container 3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:00.795500 containerd[1986]: time="2025-06-20T19:56:00.794926644Z" level=info msg="CreateContainer within sandbox \"6f40ddabd115de838c53212d25c70177ca34fe24b696e2c8664642c61b7234a9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681\"" Jun 20 19:56:00.796207 containerd[1986]: time="2025-06-20T19:56:00.796177424Z" level=info msg="StartContainer for \"3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681\"" Jun 20 19:56:00.797520 containerd[1986]: time="2025-06-20T19:56:00.797488738Z" level=info msg="connecting to shim 3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681" address="unix:///run/containerd/s/d853f91a41912ebcb969bbf6f2eeb22b8ff1a3eda36912096799647825a4abef" protocol=ttrpc version=3 Jun 20 19:56:00.802651 systemd[1]: Started cri-containerd-e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4.scope - libcontainer container e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4. Jun 20 19:56:00.817401 systemd[1]: Started cri-containerd-f659a1edd15e42e40b3d14a3fda5a65c8fbcd65391afd8c9e5f24cdb49f4cd2e.scope - libcontainer container f659a1edd15e42e40b3d14a3fda5a65c8fbcd65391afd8c9e5f24cdb49f4cd2e. Jun 20 19:56:00.832465 systemd[1]: Started cri-containerd-3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681.scope - libcontainer container 3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681. 
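[Editor's note] Each of the three sandboxes gets its own containerd shim, reachable over a unix socket under /run/containerd/s/, and the "connecting to shim" lines show the kube-apiserver, kube-controller-manager and kube-scheduler containers attaching to the same socket as their respective sandboxes. The sketch below just regroups those connections by socket; IDs and socket paths are copied from the log and truncated for readability.

    from collections import defaultdict

    SHIM_CONNECTIONS = [  # (object, shim socket path suffix), truncated from the log
        ("6f40ddabd115 (scheduler sandbox)",        "d853f91a41912ebc"),
        ("f3776c749273 (apiserver sandbox)",        "2a15ad7ed7f6525a"),
        ("53b1cf34a75f (controller-mgr sandbox)",   "23868064aebf656c"),
        ("e814d31832e4 (controller-mgr container)", "23868064aebf656c"),
        ("f659a1edd15e (apiserver container)",      "2a15ad7ed7f6525a"),
        ("3591a5163674 (scheduler container)",      "d853f91a41912ebc"),
    ]

    by_socket = defaultdict(list)
    for obj, sock in SHIM_CONNECTIONS:
        by_socket[sock].append(obj)

    for sock, objs in by_socket.items():
        print(f"/run/containerd/s/{sock}...: {', '.join(objs)}")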
Jun 20 19:56:00.929043 containerd[1986]: time="2025-06-20T19:56:00.928998903Z" level=info msg="StartContainer for \"e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4\" returns successfully" Jun 20 19:56:00.944084 containerd[1986]: time="2025-06-20T19:56:00.944043894Z" level=info msg="StartContainer for \"f659a1edd15e42e40b3d14a3fda5a65c8fbcd65391afd8c9e5f24cdb49f4cd2e\" returns successfully" Jun 20 19:56:00.949894 containerd[1986]: time="2025-06-20T19:56:00.949850049Z" level=info msg="StartContainer for \"3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681\" returns successfully" Jun 20 19:56:00.952475 kubelet[2921]: E0620 19:56:00.952431 2921 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 19:56:01.051542 kubelet[2921]: E0620 19:56:01.051406 2921 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.175:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-175&limit=500&resourceVersion=0\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 19:56:01.091196 kubelet[2921]: E0620 19:56:01.091143 2921 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 19:56:01.136587 kubelet[2921]: E0620 19:56:01.136536 2921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-175?timeout=10s\": dial tcp 172.31.30.175:6443: connect: connection refused" interval="1.6s" Jun 20 19:56:01.149441 kubelet[2921]: E0620 19:56:01.149393 2921 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 19:56:01.341317 kubelet[2921]: I0620 19:56:01.338081 2921 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-175" Jun 20 19:56:01.341317 kubelet[2921]: E0620 19:56:01.339730 2921 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.175:6443/api/v1/nodes\": dial tcp 172.31.30.175:6443: connect: connection refused" node="ip-172-31-30-175" Jun 20 19:56:01.753123 kubelet[2921]: E0620 19:56:01.753069 2921 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.175:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 19:56:01.802608 kubelet[2921]: E0620 19:56:01.802573 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:01.825001 kubelet[2921]: E0620 19:56:01.824960 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:01.831493 kubelet[2921]: E0620 19:56:01.831461 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:02.737336 kubelet[2921]: E0620 19:56:02.737283 2921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-175?timeout=10s\": dial tcp 172.31.30.175:6443: connect: connection refused" interval="3.2s" Jun 20 19:56:02.836368 kubelet[2921]: E0620 19:56:02.835915 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:02.836996 kubelet[2921]: E0620 19:56:02.836975 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:02.837708 kubelet[2921]: E0620 19:56:02.837330 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:02.955207 kubelet[2921]: I0620 19:56:02.955159 2921 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-175" Jun 20 19:56:02.962265 kubelet[2921]: E0620 19:56:02.960272 2921 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.175:6443/api/v1/nodes\": dial tcp 172.31.30.175:6443: connect: connection refused" node="ip-172-31-30-175" Jun 20 19:56:03.027799 kubelet[2921]: E0620 19:56:03.027677 2921 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 19:56:03.836270 kubelet[2921]: E0620 19:56:03.835790 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:03.838823 kubelet[2921]: E0620 19:56:03.837710 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:04.158325 kubelet[2921]: E0620 19:56:04.158067 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:04.840443 kubelet[2921]: E0620 19:56:04.839616 2921 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:05.695254 kubelet[2921]: I0620 19:56:05.694699 2921 apiserver.go:52] "Watching apiserver" Jun 20 19:56:05.724807 kubelet[2921]: E0620 19:56:05.724771 2921 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out 
waiting for the condition; caused by: nodes "ip-172-31-30-175" not found Jun 20 19:56:05.732206 kubelet[2921]: I0620 19:56:05.732155 2921 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:56:05.942180 kubelet[2921]: E0620 19:56:05.942150 2921 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-175\" not found" node="ip-172-31-30-175" Jun 20 19:56:06.076655 kubelet[2921]: E0620 19:56:06.076539 2921 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-30-175" not found Jun 20 19:56:06.162183 kubelet[2921]: I0620 19:56:06.162136 2921 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-175" Jun 20 19:56:06.173279 kubelet[2921]: I0620 19:56:06.173003 2921 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-175" Jun 20 19:56:06.231581 kubelet[2921]: I0620 19:56:06.231537 2921 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:56:06.266525 kubelet[2921]: I0620 19:56:06.266479 2921 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:56:06.279995 kubelet[2921]: I0620 19:56:06.279959 2921 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-175" Jun 20 19:56:07.794585 systemd[1]: Reload requested from client PID 3199 ('systemctl') (unit session-9.scope)... Jun 20 19:56:07.794609 systemd[1]: Reloading... Jun 20 19:56:07.937260 zram_generator::config[3246]: No configuration found. Jun 20 19:56:08.050373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:56:08.207375 systemd[1]: Reloading finished in 412 ms. Jun 20 19:56:08.243836 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:56:08.259897 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:56:08.260124 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:56:08.260183 systemd[1]: kubelet.service: Consumed 1.314s CPU time, 129.5M memory peak. Jun 20 19:56:08.264028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:56:08.541552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:56:08.551253 (kubelet)[3303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:56:08.627298 kubelet[3303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:56:08.627298 kubelet[3303]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:56:08.627298 kubelet[3303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
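[Editor's note] Worth noting in the entries above: the "Failed to ensure lease exists, will retry" interval doubles on every failed attempt (200ms, 400ms, 800ms, 1.6s, 3.2s) until the API server finally comes up. The sketch below only reproduces that doubling from the 200ms starting value visible in the log; the kubelet's actual upper bound on the interval is not shown here, so none is assumed.

    def lease_retry_intervals(start: float = 0.2, attempts: int = 5):
        interval = start
        for _ in range(attempts):
            yield interval
            interval *= 2  # matches the progression seen in the log

    print([f"{i:g}s" for i in lease_retry_intervals()])
    # -> ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s']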
Jun 20 19:56:08.627298 kubelet[3303]: I0620 19:56:08.627167 3303 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:56:08.636583 kubelet[3303]: I0620 19:56:08.636541 3303 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:56:08.636583 kubelet[3303]: I0620 19:56:08.636570 3303 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:56:08.636885 kubelet[3303]: I0620 19:56:08.636861 3303 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:56:08.638108 kubelet[3303]: I0620 19:56:08.638077 3303 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 19:56:08.640890 kubelet[3303]: I0620 19:56:08.640278 3303 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:56:08.646774 kubelet[3303]: I0620 19:56:08.646741 3303 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:56:08.650187 kubelet[3303]: I0620 19:56:08.650151 3303 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:56:08.652510 kubelet[3303]: I0620 19:56:08.652458 3303 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:56:08.652875 kubelet[3303]: I0620 19:56:08.652495 3303 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-175","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:56:08.653037 kubelet[3303]: I0620 19:56:08.652888 3303 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:56:08.653037 kubelet[3303]: I0620 19:56:08.652903 3303 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:56:08.653037 kubelet[3303]: I0620 19:56:08.652966 3303 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:56:08.653173 kubelet[3303]: I0620 
19:56:08.653159 3303 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:56:08.653216 kubelet[3303]: I0620 19:56:08.653180 3303 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:56:08.653216 kubelet[3303]: I0620 19:56:08.653211 3303 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:56:08.653316 kubelet[3303]: I0620 19:56:08.653248 3303 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:56:08.660627 kubelet[3303]: I0620 19:56:08.660519 3303 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:56:08.663307 kubelet[3303]: I0620 19:56:08.663276 3303 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 19:56:08.673313 kubelet[3303]: I0620 19:56:08.672270 3303 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:56:08.673313 kubelet[3303]: I0620 19:56:08.672338 3303 server.go:1289] "Started kubelet" Jun 20 19:56:08.673313 kubelet[3303]: I0620 19:56:08.672712 3303 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:56:08.673313 kubelet[3303]: I0620 19:56:08.672900 3303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:56:08.673682 kubelet[3303]: I0620 19:56:08.673214 3303 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:56:08.673880 kubelet[3303]: I0620 19:56:08.673860 3303 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:56:08.680618 kubelet[3303]: I0620 19:56:08.680593 3303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:56:08.693092 kubelet[3303]: I0620 19:56:08.693050 3303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:56:08.697418 kubelet[3303]: I0620 19:56:08.697390 3303 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:56:08.697724 kubelet[3303]: E0620 19:56:08.697702 3303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-175\" not found" Jun 20 19:56:08.702129 kubelet[3303]: I0620 19:56:08.702090 3303 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 19:56:08.707216 kubelet[3303]: I0620 19:56:08.707180 3303 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:56:08.707506 kubelet[3303]: I0620 19:56:08.707343 3303 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:56:08.714675 kubelet[3303]: I0620 19:56:08.713599 3303 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:56:08.714675 kubelet[3303]: I0620 19:56:08.713737 3303 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:56:08.719921 kubelet[3303]: I0620 19:56:08.719377 3303 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:56:08.723550 kubelet[3303]: I0620 19:56:08.723520 3303 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jun 20 19:56:08.723734 kubelet[3303]: I0620 19:56:08.723721 3303 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 19:56:08.723847 kubelet[3303]: I0620 19:56:08.723835 3303 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:56:08.723918 kubelet[3303]: I0620 19:56:08.723909 3303 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 19:56:08.724040 kubelet[3303]: E0620 19:56:08.724019 3303 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:56:08.784281 kubelet[3303]: I0620 19:56:08.784201 3303 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:56:08.784647 kubelet[3303]: I0620 19:56:08.784576 3303 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:56:08.784647 kubelet[3303]: I0620 19:56:08.784607 3303 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:56:08.785232 kubelet[3303]: I0620 19:56:08.785108 3303 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:56:08.785363 kubelet[3303]: I0620 19:56:08.785333 3303 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:56:08.785702 kubelet[3303]: I0620 19:56:08.785426 3303 policy_none.go:49] "None policy: Start" Jun 20 19:56:08.785702 kubelet[3303]: I0620 19:56:08.785446 3303 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:56:08.785702 kubelet[3303]: I0620 19:56:08.785461 3303 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:56:08.785702 kubelet[3303]: I0620 19:56:08.785601 3303 state_mem.go:75] "Updated machine memory state" Jun 20 19:56:08.792548 kubelet[3303]: E0620 19:56:08.792462 3303 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 19:56:08.795210 kubelet[3303]: I0620 19:56:08.795178 3303 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:56:08.796412 kubelet[3303]: I0620 19:56:08.795353 3303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:56:08.796412 kubelet[3303]: I0620 19:56:08.795757 3303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:56:08.802360 kubelet[3303]: E0620 19:56:08.802330 3303 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 19:56:08.824998 kubelet[3303]: I0620 19:56:08.824961 3303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-175" Jun 20 19:56:08.825677 kubelet[3303]: I0620 19:56:08.825650 3303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:56:08.826515 kubelet[3303]: I0620 19:56:08.826499 3303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:56:08.839097 kubelet[3303]: E0620 19:56:08.839013 3303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-175\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:56:08.839468 kubelet[3303]: E0620 19:56:08.839439 3303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-175\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-175" Jun 20 19:56:08.839582 kubelet[3303]: E0620 19:56:08.839437 3303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-175\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:56:08.898070 kubelet[3303]: I0620 19:56:08.897997 3303 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-175" Jun 20 19:56:08.907217 kubelet[3303]: I0620 19:56:08.907170 3303 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-175" Jun 20 19:56:08.907689 kubelet[3303]: I0620 19:56:08.907290 3303 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-175" Jun 20 19:56:08.911251 kubelet[3303]: I0620 19:56:08.911188 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/948b86b07f401975bc3dc0b664df2d82-ca-certs\") pod \"kube-apiserver-ip-172-31-30-175\" (UID: \"948b86b07f401975bc3dc0b664df2d82\") " pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:56:08.911397 kubelet[3303]: I0620 19:56:08.911266 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: \"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:56:08.911397 kubelet[3303]: I0620 19:56:08.911296 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: \"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:56:08.911528 kubelet[3303]: I0620 19:56:08.911423 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: \"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:56:08.911528 kubelet[3303]: I0620 19:56:08.911450 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: \"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:56:08.912202 kubelet[3303]: I0620 19:56:08.911575 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d017630e7cf8f0e0f074525a21c29775-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-175\" (UID: \"d017630e7cf8f0e0f074525a21c29775\") " pod="kube-system/kube-scheduler-ip-172-31-30-175" Jun 20 19:56:08.912202 kubelet[3303]: I0620 19:56:08.911601 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/948b86b07f401975bc3dc0b664df2d82-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-175\" (UID: \"948b86b07f401975bc3dc0b664df2d82\") " pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:56:08.912202 kubelet[3303]: I0620 19:56:08.911724 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/948b86b07f401975bc3dc0b664df2d82-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-175\" (UID: \"948b86b07f401975bc3dc0b664df2d82\") " pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:56:08.912202 kubelet[3303]: I0620 19:56:08.911754 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7cdfce65acbfffe1166f510e71bdfe7d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-175\" (UID: \"7cdfce65acbfffe1166f510e71bdfe7d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-175" Jun 20 19:56:08.912380 update_engine[1979]: I20250620 19:56:08.912268 1979 update_attempter.cc:509] Updating boot flags... 
Jun 20 19:56:09.669806 kubelet[3303]: I0620 19:56:09.668644 3303 apiserver.go:52] "Watching apiserver" Jun 20 19:56:09.707632 kubelet[3303]: I0620 19:56:09.707595 3303 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:56:09.762508 kubelet[3303]: I0620 19:56:09.762478 3303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:56:09.803629 kubelet[3303]: E0620 19:56:09.799964 3303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-175\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-175" Jun 20 19:56:09.950558 kubelet[3303]: I0620 19:56:09.950417 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-175" podStartSLOduration=3.950396995 podStartE2EDuration="3.950396995s" podCreationTimestamp="2025-06-20 19:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:56:09.948186631 +0000 UTC m=+1.387476316" watchObservedRunningTime="2025-06-20 19:56:09.950396995 +0000 UTC m=+1.389686675" Jun 20 19:56:09.954558 kubelet[3303]: I0620 19:56:09.954492 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-175" podStartSLOduration=3.9544714880000003 podStartE2EDuration="3.954471488s" podCreationTimestamp="2025-06-20 19:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:56:09.911743523 +0000 UTC m=+1.351033207" watchObservedRunningTime="2025-06-20 19:56:09.954471488 +0000 UTC m=+1.393761175" Jun 20 19:56:10.098906 kubelet[3303]: I0620 19:56:10.098826 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-175" podStartSLOduration=4.098805257 podStartE2EDuration="4.098805257s" podCreationTimestamp="2025-06-20 19:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:56:09.977835285 +0000 UTC m=+1.417124974" watchObservedRunningTime="2025-06-20 19:56:10.098805257 +0000 UTC m=+1.538094943" Jun 20 19:56:13.781506 kubelet[3303]: I0620 19:56:13.781450 3303 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:56:13.782866 containerd[1986]: time="2025-06-20T19:56:13.782831831Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:56:13.783598 kubelet[3303]: I0620 19:56:13.783062 3303 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:56:14.838526 systemd[1]: Created slice kubepods-besteffort-podf44fc805_098c_4b14_bded_eb852cc656d6.slice - libcontainer container kubepods-besteffort-podf44fc805_098c_4b14_bded_eb852cc656d6.slice. 
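The kubepods-besteffort slice unit created above encodes the pod UID with its dashes replaced by underscores; the same UID appears dash-separated in the kube-proxy-ddl9h volume lines that follow. A small illustrative helper (not kubelet code) that recovers the UID from such a slice name:

```python
# Illustrative helper (not kubelet code): the cgroup slice name created above
# embeds the pod UID with its dashes replaced by underscores; the same UID
# shows up dash-separated in the kube-proxy-ddl9h volume records below.
def slice_to_pod_uid(slice_name: str) -> str:
    encoded = slice_name.rsplit("-pod", 1)[1].removesuffix(".slice")
    return encoded.replace("_", "-")

print(slice_to_pod_uid(
    "kubepods-besteffort-podf44fc805_098c_4b14_bded_eb852cc656d6.slice"
))  # -> f44fc805-098c-4b14-bded-eb852cc656d6
```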
Jun 20 19:56:14.868587 kubelet[3303]: I0620 19:56:14.868546 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f44fc805-098c-4b14-bded-eb852cc656d6-lib-modules\") pod \"kube-proxy-ddl9h\" (UID: \"f44fc805-098c-4b14-bded-eb852cc656d6\") " pod="kube-system/kube-proxy-ddl9h" Jun 20 19:56:14.870520 kubelet[3303]: I0620 19:56:14.870340 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s7vd\" (UniqueName: \"kubernetes.io/projected/f44fc805-098c-4b14-bded-eb852cc656d6-kube-api-access-5s7vd\") pod \"kube-proxy-ddl9h\" (UID: \"f44fc805-098c-4b14-bded-eb852cc656d6\") " pod="kube-system/kube-proxy-ddl9h" Jun 20 19:56:14.870520 kubelet[3303]: I0620 19:56:14.870399 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f44fc805-098c-4b14-bded-eb852cc656d6-kube-proxy\") pod \"kube-proxy-ddl9h\" (UID: \"f44fc805-098c-4b14-bded-eb852cc656d6\") " pod="kube-system/kube-proxy-ddl9h" Jun 20 19:56:14.870520 kubelet[3303]: I0620 19:56:14.870426 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f44fc805-098c-4b14-bded-eb852cc656d6-xtables-lock\") pod \"kube-proxy-ddl9h\" (UID: \"f44fc805-098c-4b14-bded-eb852cc656d6\") " pod="kube-system/kube-proxy-ddl9h" Jun 20 19:56:15.015104 systemd[1]: Created slice kubepods-besteffort-podb259f88f_6ed0_4105_8e81_539fdac15924.slice - libcontainer container kubepods-besteffort-podb259f88f_6ed0_4105_8e81_539fdac15924.slice. Jun 20 19:56:15.071981 kubelet[3303]: I0620 19:56:15.071778 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b259f88f-6ed0-4105-8e81-539fdac15924-var-lib-calico\") pod \"tigera-operator-68f7c7984d-qn5zb\" (UID: \"b259f88f-6ed0-4105-8e81-539fdac15924\") " pod="tigera-operator/tigera-operator-68f7c7984d-qn5zb" Jun 20 19:56:15.071981 kubelet[3303]: I0620 19:56:15.071880 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8h5w\" (UniqueName: \"kubernetes.io/projected/b259f88f-6ed0-4105-8e81-539fdac15924-kube-api-access-c8h5w\") pod \"tigera-operator-68f7c7984d-qn5zb\" (UID: \"b259f88f-6ed0-4105-8e81-539fdac15924\") " pod="tigera-operator/tigera-operator-68f7c7984d-qn5zb" Jun 20 19:56:15.148059 containerd[1986]: time="2025-06-20T19:56:15.148011367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ddl9h,Uid:f44fc805-098c-4b14-bded-eb852cc656d6,Namespace:kube-system,Attempt:0,}" Jun 20 19:56:15.186561 containerd[1986]: time="2025-06-20T19:56:15.186484776Z" level=info msg="connecting to shim c422ca131eff6950578094299e1ffd85debf2adb2df603fdad1b9e76793af324" address="unix:///run/containerd/s/6334edc68461089281557e88cb2e789e9bf8ab04cbb71ddabdc698258fbd4f30" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:15.228473 systemd[1]: Started cri-containerd-c422ca131eff6950578094299e1ffd85debf2adb2df603fdad1b9e76793af324.scope - libcontainer container c422ca131eff6950578094299e1ffd85debf2adb2df603fdad1b9e76793af324. 
Jun 20 19:56:15.264066 containerd[1986]: time="2025-06-20T19:56:15.264025506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ddl9h,Uid:f44fc805-098c-4b14-bded-eb852cc656d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c422ca131eff6950578094299e1ffd85debf2adb2df603fdad1b9e76793af324\"" Jun 20 19:56:15.273483 containerd[1986]: time="2025-06-20T19:56:15.273441459Z" level=info msg="CreateContainer within sandbox \"c422ca131eff6950578094299e1ffd85debf2adb2df603fdad1b9e76793af324\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:56:15.295659 containerd[1986]: time="2025-06-20T19:56:15.295539393Z" level=info msg="Container 98fbc2c0ca089fc76e4050425f1e8366ec11a4795c1795801925b62a3fc3adf5: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:15.310633 containerd[1986]: time="2025-06-20T19:56:15.310588978Z" level=info msg="CreateContainer within sandbox \"c422ca131eff6950578094299e1ffd85debf2adb2df603fdad1b9e76793af324\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"98fbc2c0ca089fc76e4050425f1e8366ec11a4795c1795801925b62a3fc3adf5\"" Jun 20 19:56:15.311438 containerd[1986]: time="2025-06-20T19:56:15.311411105Z" level=info msg="StartContainer for \"98fbc2c0ca089fc76e4050425f1e8366ec11a4795c1795801925b62a3fc3adf5\"" Jun 20 19:56:15.313042 containerd[1986]: time="2025-06-20T19:56:15.312979642Z" level=info msg="connecting to shim 98fbc2c0ca089fc76e4050425f1e8366ec11a4795c1795801925b62a3fc3adf5" address="unix:///run/containerd/s/6334edc68461089281557e88cb2e789e9bf8ab04cbb71ddabdc698258fbd4f30" protocol=ttrpc version=3 Jun 20 19:56:15.319586 containerd[1986]: time="2025-06-20T19:56:15.319538798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-qn5zb,Uid:b259f88f-6ed0-4105-8e81-539fdac15924,Namespace:tigera-operator,Attempt:0,}" Jun 20 19:56:15.335429 systemd[1]: Started cri-containerd-98fbc2c0ca089fc76e4050425f1e8366ec11a4795c1795801925b62a3fc3adf5.scope - libcontainer container 98fbc2c0ca089fc76e4050425f1e8366ec11a4795c1795801925b62a3fc3adf5. Jun 20 19:56:15.362422 containerd[1986]: time="2025-06-20T19:56:15.362350850Z" level=info msg="connecting to shim 7a0be87f558b52d49ed335d3bf234a500f58acc212ceb2f32933b1d070004ca6" address="unix:///run/containerd/s/1034a775efa11cd60a769bdb2f523534e8ac7a0443463d6cedfd662c81ec56e9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:15.399480 systemd[1]: Started cri-containerd-7a0be87f558b52d49ed335d3bf234a500f58acc212ceb2f32933b1d070004ca6.scope - libcontainer container 7a0be87f558b52d49ed335d3bf234a500f58acc212ceb2f32933b1d070004ca6. 
Jun 20 19:56:15.424245 containerd[1986]: time="2025-06-20T19:56:15.424160145Z" level=info msg="StartContainer for \"98fbc2c0ca089fc76e4050425f1e8366ec11a4795c1795801925b62a3fc3adf5\" returns successfully" Jun 20 19:56:15.495653 containerd[1986]: time="2025-06-20T19:56:15.495602616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-qn5zb,Uid:b259f88f-6ed0-4105-8e81-539fdac15924,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7a0be87f558b52d49ed335d3bf234a500f58acc212ceb2f32933b1d070004ca6\"" Jun 20 19:56:15.499070 containerd[1986]: time="2025-06-20T19:56:15.499040323Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 19:56:15.795470 kubelet[3303]: I0620 19:56:15.795235 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ddl9h" podStartSLOduration=1.7952056220000001 podStartE2EDuration="1.795205622s" podCreationTimestamp="2025-06-20 19:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:56:15.794986376 +0000 UTC m=+7.234276063" watchObservedRunningTime="2025-06-20 19:56:15.795205622 +0000 UTC m=+7.234495322" Jun 20 19:56:16.896642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671512043.mount: Deactivated successfully. Jun 20 19:56:17.821056 containerd[1986]: time="2025-06-20T19:56:17.820981272Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:17.822357 containerd[1986]: time="2025-06-20T19:56:17.822061731Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 20 19:56:17.823569 containerd[1986]: time="2025-06-20T19:56:17.823527482Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:17.853936 containerd[1986]: time="2025-06-20T19:56:17.853884786Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:17.854775 containerd[1986]: time="2025-06-20T19:56:17.854742728Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 2.355465808s" Jun 20 19:56:17.854904 containerd[1986]: time="2025-06-20T19:56:17.854890530Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 20 19:56:17.859589 containerd[1986]: time="2025-06-20T19:56:17.859553373Z" level=info msg="CreateContainer within sandbox \"7a0be87f558b52d49ed335d3bf234a500f58acc212ceb2f32933b1d070004ca6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 19:56:17.874075 containerd[1986]: time="2025-06-20T19:56:17.873558705Z" level=info msg="Container 45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:17.889787 containerd[1986]: time="2025-06-20T19:56:17.889748298Z" level=info 
msg="CreateContainer within sandbox \"7a0be87f558b52d49ed335d3bf234a500f58acc212ceb2f32933b1d070004ca6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee\"" Jun 20 19:56:17.890829 containerd[1986]: time="2025-06-20T19:56:17.890757321Z" level=info msg="StartContainer for \"45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee\"" Jun 20 19:56:17.892124 containerd[1986]: time="2025-06-20T19:56:17.892065890Z" level=info msg="connecting to shim 45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee" address="unix:///run/containerd/s/1034a775efa11cd60a769bdb2f523534e8ac7a0443463d6cedfd662c81ec56e9" protocol=ttrpc version=3 Jun 20 19:56:17.918551 systemd[1]: Started cri-containerd-45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee.scope - libcontainer container 45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee. Jun 20 19:56:17.953724 containerd[1986]: time="2025-06-20T19:56:17.953679079Z" level=info msg="StartContainer for \"45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee\" returns successfully" Jun 20 19:56:18.813156 kubelet[3303]: I0620 19:56:18.813104 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-qn5zb" podStartSLOduration=2.454643437 podStartE2EDuration="4.813086862s" podCreationTimestamp="2025-06-20 19:56:14 +0000 UTC" firstStartedPulling="2025-06-20 19:56:15.497306794 +0000 UTC m=+6.936596457" lastFinishedPulling="2025-06-20 19:56:17.855750216 +0000 UTC m=+9.295039882" observedRunningTime="2025-06-20 19:56:18.813039897 +0000 UTC m=+10.252329581" watchObservedRunningTime="2025-06-20 19:56:18.813086862 +0000 UTC m=+10.252376545" Jun 20 19:56:25.005490 sudo[2351]: pam_unix(sudo:session): session closed for user root Jun 20 19:56:25.029239 sshd[2350]: Connection closed by 147.75.109.163 port 40272 Jun 20 19:56:25.034410 sshd-session[2348]: pam_unix(sshd:session): session closed for user core Jun 20 19:56:25.043604 systemd[1]: sshd@8-172.31.30.175:22-147.75.109.163:40272.service: Deactivated successfully. Jun 20 19:56:25.052932 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:56:25.054167 systemd[1]: session-9.scope: Consumed 5.220s CPU time, 152.8M memory peak. Jun 20 19:56:25.058530 systemd-logind[1969]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:56:25.062772 systemd-logind[1969]: Removed session 9. Jun 20 19:56:29.190817 systemd[1]: Created slice kubepods-besteffort-pod0752edf7_bdbb_4ca7_a49d_3180e68dc5fe.slice - libcontainer container kubepods-besteffort-pod0752edf7_bdbb_4ca7_a49d_3180e68dc5fe.slice. 
Jun 20 19:56:29.263788 kubelet[3303]: I0620 19:56:29.263628 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0752edf7-bdbb-4ca7-a49d-3180e68dc5fe-tigera-ca-bundle\") pod \"calico-typha-79dfb57759-xp9r6\" (UID: \"0752edf7-bdbb-4ca7-a49d-3180e68dc5fe\") " pod="calico-system/calico-typha-79dfb57759-xp9r6" Jun 20 19:56:29.263788 kubelet[3303]: I0620 19:56:29.263690 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0752edf7-bdbb-4ca7-a49d-3180e68dc5fe-typha-certs\") pod \"calico-typha-79dfb57759-xp9r6\" (UID: \"0752edf7-bdbb-4ca7-a49d-3180e68dc5fe\") " pod="calico-system/calico-typha-79dfb57759-xp9r6" Jun 20 19:56:29.263788 kubelet[3303]: I0620 19:56:29.263715 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdgdl\" (UniqueName: \"kubernetes.io/projected/0752edf7-bdbb-4ca7-a49d-3180e68dc5fe-kube-api-access-cdgdl\") pod \"calico-typha-79dfb57759-xp9r6\" (UID: \"0752edf7-bdbb-4ca7-a49d-3180e68dc5fe\") " pod="calico-system/calico-typha-79dfb57759-xp9r6" Jun 20 19:56:29.500898 containerd[1986]: time="2025-06-20T19:56:29.500333157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79dfb57759-xp9r6,Uid:0752edf7-bdbb-4ca7-a49d-3180e68dc5fe,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:29.529600 systemd[1]: Created slice kubepods-besteffort-pod6c3afffb_29b6_43a5_84e9_f4b3e503285c.slice - libcontainer container kubepods-besteffort-pod6c3afffb_29b6_43a5_84e9_f4b3e503285c.slice. Jun 20 19:56:29.554471 containerd[1986]: time="2025-06-20T19:56:29.554416823Z" level=info msg="connecting to shim c749c0cbade26d28919a4575c57bcc5d480e7784d674544cedb4410ec1299240" address="unix:///run/containerd/s/a9b87ce26bc2904930da58a775345a32526cb39ca2a13ac5e7ca18484c0c261f" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:29.565927 kubelet[3303]: I0620 19:56:29.565381 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6c3afffb-29b6-43a5-84e9-f4b3e503285c-cni-bin-dir\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.565927 kubelet[3303]: I0620 19:56:29.565423 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3afffb-29b6-43a5-84e9-f4b3e503285c-tigera-ca-bundle\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.565927 kubelet[3303]: I0620 19:56:29.565440 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6c3afffb-29b6-43a5-84e9-f4b3e503285c-var-lib-calico\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.565927 kubelet[3303]: I0620 19:56:29.565454 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6c3afffb-29b6-43a5-84e9-f4b3e503285c-var-run-calico\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " 
pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.565927 kubelet[3303]: I0620 19:56:29.565469 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6c3afffb-29b6-43a5-84e9-f4b3e503285c-policysync\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.566319 kubelet[3303]: I0620 19:56:29.565488 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c3afffb-29b6-43a5-84e9-f4b3e503285c-xtables-lock\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.566319 kubelet[3303]: I0620 19:56:29.565527 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8nt\" (UniqueName: \"kubernetes.io/projected/6c3afffb-29b6-43a5-84e9-f4b3e503285c-kube-api-access-4p8nt\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.566319 kubelet[3303]: I0620 19:56:29.565547 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6c3afffb-29b6-43a5-84e9-f4b3e503285c-cni-log-dir\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.566319 kubelet[3303]: I0620 19:56:29.565560 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6c3afffb-29b6-43a5-84e9-f4b3e503285c-flexvol-driver-host\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.566319 kubelet[3303]: I0620 19:56:29.565575 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6c3afffb-29b6-43a5-84e9-f4b3e503285c-cni-net-dir\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.566450 kubelet[3303]: I0620 19:56:29.565588 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c3afffb-29b6-43a5-84e9-f4b3e503285c-lib-modules\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.566450 kubelet[3303]: I0620 19:56:29.565603 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6c3afffb-29b6-43a5-84e9-f4b3e503285c-node-certs\") pod \"calico-node-hnwtx\" (UID: \"6c3afffb-29b6-43a5-84e9-f4b3e503285c\") " pod="calico-system/calico-node-hnwtx" Jun 20 19:56:29.599686 systemd[1]: Started cri-containerd-c749c0cbade26d28919a4575c57bcc5d480e7784d674544cedb4410ec1299240.scope - libcontainer container c749c0cbade26d28919a4575c57bcc5d480e7784d674544cedb4410ec1299240. 
Jun 20 19:56:29.682720 kubelet[3303]: E0620 19:56:29.681070 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.682720 kubelet[3303]: W0620 19:56:29.681146 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.682720 kubelet[3303]: E0620 19:56:29.681210 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.689364 kubelet[3303]: E0620 19:56:29.689246 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.689364 kubelet[3303]: W0620 19:56:29.689277 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.689364 kubelet[3303]: E0620 19:56:29.689304 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.694372 kubelet[3303]: E0620 19:56:29.694274 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.694372 kubelet[3303]: W0620 19:56:29.694297 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.694372 kubelet[3303]: E0620 19:56:29.694321 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.699747 containerd[1986]: time="2025-06-20T19:56:29.699608021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79dfb57759-xp9r6,Uid:0752edf7-bdbb-4ca7-a49d-3180e68dc5fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"c749c0cbade26d28919a4575c57bcc5d480e7784d674544cedb4410ec1299240\"" Jun 20 19:56:29.705053 containerd[1986]: time="2025-06-20T19:56:29.704889633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 19:56:29.794126 kubelet[3303]: E0620 19:56:29.794004 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92w4t" podUID="d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098" Jun 20 19:56:29.848472 containerd[1986]: time="2025-06-20T19:56:29.848425501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hnwtx,Uid:6c3afffb-29b6-43a5-84e9-f4b3e503285c,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:29.864653 kubelet[3303]: E0620 19:56:29.864598 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.864976 kubelet[3303]: W0620 19:56:29.864626 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.864976 kubelet[3303]: E0620 19:56:29.864843 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.865395 kubelet[3303]: E0620 19:56:29.865348 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.865395 kubelet[3303]: W0620 19:56:29.865365 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.865663 kubelet[3303]: E0620 19:56:29.865532 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.866075 kubelet[3303]: E0620 19:56:29.865876 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.866075 kubelet[3303]: W0620 19:56:29.865889 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.866075 kubelet[3303]: E0620 19:56:29.865903 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.868312 kubelet[3303]: E0620 19:56:29.868272 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.868312 kubelet[3303]: W0620 19:56:29.868289 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.868540 kubelet[3303]: E0620 19:56:29.868454 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.868991 kubelet[3303]: E0620 19:56:29.868879 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.868991 kubelet[3303]: W0620 19:56:29.868894 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.868991 kubelet[3303]: E0620 19:56:29.868909 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.869382 kubelet[3303]: E0620 19:56:29.869276 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.869382 kubelet[3303]: W0620 19:56:29.869291 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.869382 kubelet[3303]: E0620 19:56:29.869304 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.869867 kubelet[3303]: E0620 19:56:29.869735 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.869867 kubelet[3303]: W0620 19:56:29.869751 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.869867 kubelet[3303]: E0620 19:56:29.869765 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.870242 kubelet[3303]: E0620 19:56:29.870184 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.870242 kubelet[3303]: W0620 19:56:29.870199 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.871095 kubelet[3303]: E0620 19:56:29.870213 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.871095 kubelet[3303]: E0620 19:56:29.870582 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.871095 kubelet[3303]: W0620 19:56:29.870593 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.871095 kubelet[3303]: E0620 19:56:29.870606 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.871095 kubelet[3303]: E0620 19:56:29.870852 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.871095 kubelet[3303]: W0620 19:56:29.870889 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.871095 kubelet[3303]: E0620 19:56:29.870902 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.871833 kubelet[3303]: E0620 19:56:29.871391 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.871833 kubelet[3303]: W0620 19:56:29.871402 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.871833 kubelet[3303]: E0620 19:56:29.871513 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.872195 kubelet[3303]: E0620 19:56:29.872115 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.872195 kubelet[3303]: W0620 19:56:29.872128 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.872195 kubelet[3303]: E0620 19:56:29.872142 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.872509 kubelet[3303]: E0620 19:56:29.872489 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.872509 kubelet[3303]: W0620 19:56:29.872505 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.872627 kubelet[3303]: E0620 19:56:29.872519 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.872736 kubelet[3303]: E0620 19:56:29.872717 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.872736 kubelet[3303]: W0620 19:56:29.872732 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.872835 kubelet[3303]: E0620 19:56:29.872744 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.872928 kubelet[3303]: E0620 19:56:29.872908 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.872928 kubelet[3303]: W0620 19:56:29.872923 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.873147 kubelet[3303]: E0620 19:56:29.872934 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.873364 kubelet[3303]: E0620 19:56:29.873271 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.873364 kubelet[3303]: W0620 19:56:29.873284 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.873364 kubelet[3303]: E0620 19:56:29.873297 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.873797 kubelet[3303]: E0620 19:56:29.873698 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.873797 kubelet[3303]: W0620 19:56:29.873711 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.873797 kubelet[3303]: E0620 19:56:29.873724 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.874261 kubelet[3303]: E0620 19:56:29.874094 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.874261 kubelet[3303]: W0620 19:56:29.874170 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.874261 kubelet[3303]: E0620 19:56:29.874185 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.874705 kubelet[3303]: E0620 19:56:29.874628 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.874705 kubelet[3303]: W0620 19:56:29.874641 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.874705 kubelet[3303]: E0620 19:56:29.874655 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.875266 kubelet[3303]: E0620 19:56:29.875051 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.875266 kubelet[3303]: W0620 19:56:29.875064 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.875266 kubelet[3303]: E0620 19:56:29.875078 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.875690 kubelet[3303]: E0620 19:56:29.875677 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.875917 kubelet[3303]: W0620 19:56:29.875775 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.875917 kubelet[3303]: E0620 19:56:29.875795 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.875917 kubelet[3303]: I0620 19:56:29.875833 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098-kubelet-dir\") pod \"csi-node-driver-92w4t\" (UID: \"d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098\") " pod="calico-system/csi-node-driver-92w4t" Jun 20 19:56:29.876421 kubelet[3303]: E0620 19:56:29.876258 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.876421 kubelet[3303]: W0620 19:56:29.876273 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.876421 kubelet[3303]: E0620 19:56:29.876287 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.876421 kubelet[3303]: I0620 19:56:29.876309 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098-socket-dir\") pod \"csi-node-driver-92w4t\" (UID: \"d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098\") " pod="calico-system/csi-node-driver-92w4t" Jun 20 19:56:29.876812 kubelet[3303]: E0620 19:56:29.876769 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.876812 kubelet[3303]: W0620 19:56:29.876785 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.876812 kubelet[3303]: E0620 19:56:29.876798 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.877207 kubelet[3303]: I0620 19:56:29.876964 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098-registration-dir\") pod \"csi-node-driver-92w4t\" (UID: \"d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098\") " pod="calico-system/csi-node-driver-92w4t" Jun 20 19:56:29.877549 kubelet[3303]: E0620 19:56:29.877510 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.877549 kubelet[3303]: W0620 19:56:29.877524 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.877758 kubelet[3303]: E0620 19:56:29.877636 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.878079 kubelet[3303]: E0620 19:56:29.878067 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.878345 kubelet[3303]: W0620 19:56:29.878252 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.878345 kubelet[3303]: E0620 19:56:29.878270 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.878849 kubelet[3303]: E0620 19:56:29.878818 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.878971 kubelet[3303]: W0620 19:56:29.878929 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.878971 kubelet[3303]: E0620 19:56:29.878948 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.879499 kubelet[3303]: E0620 19:56:29.879454 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.879499 kubelet[3303]: W0620 19:56:29.879469 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.879499 kubelet[3303]: E0620 19:56:29.879483 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.879873 kubelet[3303]: I0620 19:56:29.879785 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098-varrun\") pod \"csi-node-driver-92w4t\" (UID: \"d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098\") " pod="calico-system/csi-node-driver-92w4t" Jun 20 19:56:29.880181 kubelet[3303]: E0620 19:56:29.880147 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.880301 kubelet[3303]: W0620 19:56:29.880161 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.880301 kubelet[3303]: E0620 19:56:29.880261 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.880686 kubelet[3303]: E0620 19:56:29.880650 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.880686 kubelet[3303]: W0620 19:56:29.880663 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.880970 kubelet[3303]: E0620 19:56:29.880865 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.881365 kubelet[3303]: E0620 19:56:29.881308 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.881365 kubelet[3303]: W0620 19:56:29.881321 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.881365 kubelet[3303]: E0620 19:56:29.881333 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.881607 kubelet[3303]: I0620 19:56:29.881473 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hn8d\" (UniqueName: \"kubernetes.io/projected/d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098-kube-api-access-8hn8d\") pod \"csi-node-driver-92w4t\" (UID: \"d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098\") " pod="calico-system/csi-node-driver-92w4t" Jun 20 19:56:29.881938 kubelet[3303]: E0620 19:56:29.881924 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.882078 kubelet[3303]: W0620 19:56:29.882009 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.882078 kubelet[3303]: E0620 19:56:29.882026 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.882508 kubelet[3303]: E0620 19:56:29.882495 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.882604 kubelet[3303]: W0620 19:56:29.882590 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.882833 kubelet[3303]: E0620 19:56:29.882673 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.883345 kubelet[3303]: E0620 19:56:29.883303 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.883345 kubelet[3303]: W0620 19:56:29.883316 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.883345 kubelet[3303]: E0620 19:56:29.883329 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.883739 kubelet[3303]: E0620 19:56:29.883691 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.883739 kubelet[3303]: W0620 19:56:29.883704 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.883739 kubelet[3303]: E0620 19:56:29.883717 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.884235 kubelet[3303]: E0620 19:56:29.884169 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.884235 kubelet[3303]: W0620 19:56:29.884185 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.884235 kubelet[3303]: E0620 19:56:29.884200 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.898792 containerd[1986]: time="2025-06-20T19:56:29.898430371Z" level=info msg="connecting to shim a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8" address="unix:///run/containerd/s/36d4406bdd9c1ed2240dd4f1489ee36c4b2e77bd9db97395538fae15a7e793e0" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:29.935477 systemd[1]: Started cri-containerd-a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8.scope - libcontainer container a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8. Jun 20 19:56:29.984749 kubelet[3303]: E0620 19:56:29.984264 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.985114 kubelet[3303]: W0620 19:56:29.984959 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.985114 kubelet[3303]: E0620 19:56:29.984996 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.986199 kubelet[3303]: E0620 19:56:29.986170 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.986386 kubelet[3303]: W0620 19:56:29.986267 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.986386 kubelet[3303]: E0620 19:56:29.986294 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.987405 kubelet[3303]: E0620 19:56:29.986934 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.987609 kubelet[3303]: W0620 19:56:29.986950 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.987609 kubelet[3303]: E0620 19:56:29.987548 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.988126 kubelet[3303]: E0620 19:56:29.988097 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.989268 kubelet[3303]: W0620 19:56:29.988111 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.989268 kubelet[3303]: E0620 19:56:29.989249 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.989836 kubelet[3303]: E0620 19:56:29.989775 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.989836 kubelet[3303]: W0620 19:56:29.989803 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.989836 kubelet[3303]: E0620 19:56:29.989820 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.990302 kubelet[3303]: E0620 19:56:29.990279 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.990417 kubelet[3303]: W0620 19:56:29.990347 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.990417 kubelet[3303]: E0620 19:56:29.990363 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.990894 kubelet[3303]: E0620 19:56:29.990852 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.990894 kubelet[3303]: W0620 19:56:29.990867 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.990894 kubelet[3303]: E0620 19:56:29.990880 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.991591 kubelet[3303]: E0620 19:56:29.991535 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.991591 kubelet[3303]: W0620 19:56:29.991552 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.991591 kubelet[3303]: E0620 19:56:29.991565 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.992081 kubelet[3303]: E0620 19:56:29.992033 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.992081 kubelet[3303]: W0620 19:56:29.992047 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.992297 kubelet[3303]: E0620 19:56:29.992060 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.993144 kubelet[3303]: E0620 19:56:29.993100 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.993144 kubelet[3303]: W0620 19:56:29.993115 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.993144 kubelet[3303]: E0620 19:56:29.993128 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.993683 kubelet[3303]: E0620 19:56:29.993645 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.993683 kubelet[3303]: W0620 19:56:29.993657 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.993683 kubelet[3303]: E0620 19:56:29.993670 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.994213 kubelet[3303]: E0620 19:56:29.994173 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.994213 kubelet[3303]: W0620 19:56:29.994187 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.994213 kubelet[3303]: E0620 19:56:29.994199 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.995576 kubelet[3303]: E0620 19:56:29.995513 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.995576 kubelet[3303]: W0620 19:56:29.995544 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.995576 kubelet[3303]: E0620 19:56:29.995560 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.996060 kubelet[3303]: E0620 19:56:29.996013 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.996060 kubelet[3303]: W0620 19:56:29.996032 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.996060 kubelet[3303]: E0620 19:56:29.996045 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.996682 kubelet[3303]: E0620 19:56:29.996575 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.996682 kubelet[3303]: W0620 19:56:29.996589 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.996682 kubelet[3303]: E0620 19:56:29.996606 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.997656 kubelet[3303]: E0620 19:56:29.997296 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.997656 kubelet[3303]: W0620 19:56:29.997311 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.997656 kubelet[3303]: E0620 19:56:29.997325 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.998145 kubelet[3303]: E0620 19:56:29.998079 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.998145 kubelet[3303]: W0620 19:56:29.998117 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.998145 kubelet[3303]: E0620 19:56:29.998131 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:29.999005 kubelet[3303]: E0620 19:56:29.998946 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.999005 kubelet[3303]: W0620 19:56:29.998960 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.999005 kubelet[3303]: E0620 19:56:29.998973 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:29.999911 kubelet[3303]: E0620 19:56:29.999856 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:29.999911 kubelet[3303]: W0620 19:56:29.999870 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:29.999911 kubelet[3303]: E0620 19:56:29.999884 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:30.000736 kubelet[3303]: E0620 19:56:30.000691 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:30.000736 kubelet[3303]: W0620 19:56:30.000707 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:30.000736 kubelet[3303]: E0620 19:56:30.000721 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:30.001422 kubelet[3303]: E0620 19:56:30.001381 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:30.001422 kubelet[3303]: W0620 19:56:30.001395 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:30.001422 kubelet[3303]: E0620 19:56:30.001408 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:30.002687 kubelet[3303]: E0620 19:56:30.002662 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:30.002858 kubelet[3303]: W0620 19:56:30.002765 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:30.002858 kubelet[3303]: E0620 19:56:30.002783 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:30.003192 kubelet[3303]: E0620 19:56:30.003151 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:30.003192 kubelet[3303]: W0620 19:56:30.003165 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:30.003192 kubelet[3303]: E0620 19:56:30.003177 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:30.003664 kubelet[3303]: E0620 19:56:30.003624 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:30.003664 kubelet[3303]: W0620 19:56:30.003637 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:30.003664 kubelet[3303]: E0620 19:56:30.003649 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:30.004233 kubelet[3303]: E0620 19:56:30.004204 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:30.004859 kubelet[3303]: W0620 19:56:30.004395 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:30.004859 kubelet[3303]: E0620 19:56:30.004417 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:30.027463 kubelet[3303]: E0620 19:56:30.027436 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:30.027689 kubelet[3303]: W0620 19:56:30.027610 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:30.027689 kubelet[3303]: E0620 19:56:30.027644 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:30.063891 containerd[1986]: time="2025-06-20T19:56:30.063381638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hnwtx,Uid:6c3afffb-29b6-43a5-84e9-f4b3e503285c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8\"" Jun 20 19:56:31.183432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4019698356.mount: Deactivated successfully. 
Jun 20 19:56:31.725957 kubelet[3303]: E0620 19:56:31.725758 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92w4t" podUID="d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098" Jun 20 19:56:32.249617 containerd[1986]: time="2025-06-20T19:56:32.249561432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:32.250801 containerd[1986]: time="2025-06-20T19:56:32.250747781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 20 19:56:32.251911 containerd[1986]: time="2025-06-20T19:56:32.251836572Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:32.254306 containerd[1986]: time="2025-06-20T19:56:32.254238225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:32.255208 containerd[1986]: time="2025-06-20T19:56:32.254934517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 2.549984602s" Jun 20 19:56:32.255208 containerd[1986]: time="2025-06-20T19:56:32.254978546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 20 19:56:32.256623 containerd[1986]: time="2025-06-20T19:56:32.256584894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 19:56:32.280958 containerd[1986]: time="2025-06-20T19:56:32.280883804Z" level=info msg="CreateContainer within sandbox \"c749c0cbade26d28919a4575c57bcc5d480e7784d674544cedb4410ec1299240\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 19:56:32.292718 containerd[1986]: time="2025-06-20T19:56:32.292497171Z" level=info msg="Container c0624a35b14d860b6de8f4e9abc00c650989b73891f3f6c55d1146c7f1c172eb: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:32.299776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2118736503.mount: Deactivated successfully. 
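The ImageCreate/PullImage entries above are containerd's CRI plugin fetching ghcr.io/flatcar/calico/typha:v3.30.1 into the k8s.io namespace and reporting the pull duration. Roughly the same operation can be driven with containerd's Go client; a sketch, assuming the classic github.com/containerd/containerd import path (containerd 2.x moved the client into a v2 module) and the default socket path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace, as in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest:", img.Target().Digest)
}
```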
Jun 20 19:56:32.310394 containerd[1986]: time="2025-06-20T19:56:32.310210600Z" level=info msg="CreateContainer within sandbox \"c749c0cbade26d28919a4575c57bcc5d480e7784d674544cedb4410ec1299240\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c0624a35b14d860b6de8f4e9abc00c650989b73891f3f6c55d1146c7f1c172eb\"" Jun 20 19:56:32.312600 containerd[1986]: time="2025-06-20T19:56:32.312531540Z" level=info msg="StartContainer for \"c0624a35b14d860b6de8f4e9abc00c650989b73891f3f6c55d1146c7f1c172eb\"" Jun 20 19:56:32.314734 containerd[1986]: time="2025-06-20T19:56:32.314686115Z" level=info msg="connecting to shim c0624a35b14d860b6de8f4e9abc00c650989b73891f3f6c55d1146c7f1c172eb" address="unix:///run/containerd/s/a9b87ce26bc2904930da58a775345a32526cb39ca2a13ac5e7ca18484c0c261f" protocol=ttrpc version=3 Jun 20 19:56:32.379468 systemd[1]: Started cri-containerd-c0624a35b14d860b6de8f4e9abc00c650989b73891f3f6c55d1146c7f1c172eb.scope - libcontainer container c0624a35b14d860b6de8f4e9abc00c650989b73891f3f6c55d1146c7f1c172eb. Jun 20 19:56:32.474298 containerd[1986]: time="2025-06-20T19:56:32.473565809Z" level=info msg="StartContainer for \"c0624a35b14d860b6de8f4e9abc00c650989b73891f3f6c55d1146c7f1c172eb\" returns successfully" Jun 20 19:56:32.896368 kubelet[3303]: E0620 19:56:32.896339 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.896368 kubelet[3303]: W0620 19:56:32.896365 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.913866 kubelet[3303]: E0620 19:56:32.913792 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.914644 kubelet[3303]: E0620 19:56:32.914578 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.914644 kubelet[3303]: W0620 19:56:32.914601 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.915283 kubelet[3303]: E0620 19:56:32.914725 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.915537 kubelet[3303]: E0620 19:56:32.915384 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.915699 kubelet[3303]: W0620 19:56:32.915582 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.915699 kubelet[3303]: E0620 19:56:32.915604 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:32.916361 kubelet[3303]: E0620 19:56:32.916256 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.916361 kubelet[3303]: W0620 19:56:32.916274 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.916361 kubelet[3303]: E0620 19:56:32.916290 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.917242 kubelet[3303]: E0620 19:56:32.916929 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.917242 kubelet[3303]: W0620 19:56:32.916945 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.917242 kubelet[3303]: E0620 19:56:32.916959 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.917506 kubelet[3303]: E0620 19:56:32.917477 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.917506 kubelet[3303]: W0620 19:56:32.917494 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.917603 kubelet[3303]: E0620 19:56:32.917509 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.918139 kubelet[3303]: E0620 19:56:32.917933 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.918139 kubelet[3303]: W0620 19:56:32.917948 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.918139 kubelet[3303]: E0620 19:56:32.917962 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.919331 kubelet[3303]: E0620 19:56:32.919302 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.919331 kubelet[3303]: W0620 19:56:32.919318 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.919451 kubelet[3303]: E0620 19:56:32.919334 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:32.919772 kubelet[3303]: E0620 19:56:32.919723 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.919772 kubelet[3303]: W0620 19:56:32.919740 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.919772 kubelet[3303]: E0620 19:56:32.919753 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.920039 kubelet[3303]: E0620 19:56:32.920019 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.920039 kubelet[3303]: W0620 19:56:32.920030 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.920121 kubelet[3303]: E0620 19:56:32.920042 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.920423 kubelet[3303]: E0620 19:56:32.920280 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.920423 kubelet[3303]: W0620 19:56:32.920294 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.920423 kubelet[3303]: E0620 19:56:32.920306 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.920627 kubelet[3303]: E0620 19:56:32.920496 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.920627 kubelet[3303]: W0620 19:56:32.920506 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.920627 kubelet[3303]: E0620 19:56:32.920517 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.920957 kubelet[3303]: E0620 19:56:32.920768 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.920957 kubelet[3303]: W0620 19:56:32.920778 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.920957 kubelet[3303]: E0620 19:56:32.920789 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:32.921486 kubelet[3303]: E0620 19:56:32.921458 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.921486 kubelet[3303]: W0620 19:56:32.921485 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.921612 kubelet[3303]: E0620 19:56:32.921499 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.922354 kubelet[3303]: E0620 19:56:32.922329 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.922354 kubelet[3303]: W0620 19:56:32.922345 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.922634 kubelet[3303]: E0620 19:56:32.922360 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.922691 kubelet[3303]: E0620 19:56:32.922665 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.922691 kubelet[3303]: W0620 19:56:32.922675 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.922691 kubelet[3303]: E0620 19:56:32.922688 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.922956 kubelet[3303]: E0620 19:56:32.922934 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.922956 kubelet[3303]: W0620 19:56:32.922949 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.923079 kubelet[3303]: E0620 19:56:32.922962 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.923445 kubelet[3303]: E0620 19:56:32.923421 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.923445 kubelet[3303]: W0620 19:56:32.923439 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.923552 kubelet[3303]: E0620 19:56:32.923452 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:32.924523 kubelet[3303]: E0620 19:56:32.924505 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.924523 kubelet[3303]: W0620 19:56:32.924521 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.924652 kubelet[3303]: E0620 19:56:32.924535 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.924895 kubelet[3303]: E0620 19:56:32.924768 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.924895 kubelet[3303]: W0620 19:56:32.924779 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.924895 kubelet[3303]: E0620 19:56:32.924791 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.925738 kubelet[3303]: E0620 19:56:32.925022 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.925738 kubelet[3303]: W0620 19:56:32.925032 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.925738 kubelet[3303]: E0620 19:56:32.925044 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.925738 kubelet[3303]: E0620 19:56:32.925313 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.925738 kubelet[3303]: W0620 19:56:32.925352 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.925738 kubelet[3303]: E0620 19:56:32.925364 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.926344 kubelet[3303]: E0620 19:56:32.926326 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.926344 kubelet[3303]: W0620 19:56:32.926343 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.926698 kubelet[3303]: E0620 19:56:32.926357 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:32.926698 kubelet[3303]: E0620 19:56:32.926647 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.926698 kubelet[3303]: W0620 19:56:32.926659 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.926698 kubelet[3303]: E0620 19:56:32.926672 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.928562 kubelet[3303]: E0620 19:56:32.928451 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.928562 kubelet[3303]: W0620 19:56:32.928470 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.928562 kubelet[3303]: E0620 19:56:32.928486 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.928742 kubelet[3303]: E0620 19:56:32.928668 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.928742 kubelet[3303]: W0620 19:56:32.928678 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.928742 kubelet[3303]: E0620 19:56:32.928692 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.929535 kubelet[3303]: E0620 19:56:32.929123 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.929535 kubelet[3303]: W0620 19:56:32.929136 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.929535 kubelet[3303]: E0620 19:56:32.929149 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.930531 kubelet[3303]: E0620 19:56:32.930485 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.930531 kubelet[3303]: W0620 19:56:32.930500 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.930531 kubelet[3303]: E0620 19:56:32.930515 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:32.931043 kubelet[3303]: E0620 19:56:32.930998 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.931043 kubelet[3303]: W0620 19:56:32.931013 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.931043 kubelet[3303]: E0620 19:56:32.931027 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.932481 kubelet[3303]: E0620 19:56:32.932383 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.932481 kubelet[3303]: W0620 19:56:32.932399 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.932481 kubelet[3303]: E0620 19:56:32.932414 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.932968 kubelet[3303]: E0620 19:56:32.932840 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.932968 kubelet[3303]: W0620 19:56:32.932855 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.932968 kubelet[3303]: E0620 19:56:32.932869 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.935029 kubelet[3303]: E0620 19:56:32.934966 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.936402 kubelet[3303]: W0620 19:56:32.935280 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.936402 kubelet[3303]: E0620 19:56:32.935306 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:56:32.936890 kubelet[3303]: E0620 19:56:32.936874 3303 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:56:32.936993 kubelet[3303]: W0620 19:56:32.936979 3303 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:56:32.937430 kubelet[3303]: E0620 19:56:32.937411 3303 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:56:33.559944 containerd[1986]: time="2025-06-20T19:56:33.559894473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:33.562692 containerd[1986]: time="2025-06-20T19:56:33.562502758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 20 19:56:33.565242 containerd[1986]: time="2025-06-20T19:56:33.565180459Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:33.568009 containerd[1986]: time="2025-06-20T19:56:33.567932452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:33.597957 containerd[1986]: time="2025-06-20T19:56:33.568581625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.311964168s" Jun 20 19:56:33.597957 containerd[1986]: time="2025-06-20T19:56:33.568617705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 20 19:56:33.597957 containerd[1986]: time="2025-06-20T19:56:33.574603340Z" level=info msg="CreateContainer within sandbox \"a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:56:33.597957 containerd[1986]: time="2025-06-20T19:56:33.586031976Z" level=info msg="Container fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:33.600494 containerd[1986]: time="2025-06-20T19:56:33.600447598Z" level=info msg="CreateContainer within sandbox \"a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e\"" Jun 20 19:56:33.602440 containerd[1986]: time="2025-06-20T19:56:33.601065723Z" level=info msg="StartContainer for \"fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e\"" Jun 20 19:56:33.603192 containerd[1986]: time="2025-06-20T19:56:33.603139242Z" level=info msg="connecting to shim fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e" address="unix:///run/containerd/s/36d4406bdd9c1ed2240dd4f1489ee36c4b2e77bd9db97395538fae15a7e793e0" protocol=ttrpc version=3 Jun 20 19:56:33.639453 systemd[1]: Started cri-containerd-fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e.scope - libcontainer container fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e. 
Jun 20 19:56:33.705149 containerd[1986]: time="2025-06-20T19:56:33.705107101Z" level=info msg="StartContainer for \"fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e\" returns successfully" Jun 20 19:56:33.722689 systemd[1]: cri-containerd-fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e.scope: Deactivated successfully. Jun 20 19:56:33.725060 kubelet[3303]: E0620 19:56:33.724580 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92w4t" podUID="d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098" Jun 20 19:56:33.800288 containerd[1986]: time="2025-06-20T19:56:33.799362813Z" level=info msg="received exit event container_id:\"fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e\" id:\"fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e\" pid:4254 exited_at:{seconds:1750449393 nanos:729727950}" Jun 20 19:56:33.805034 containerd[1986]: time="2025-06-20T19:56:33.804971043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e\" id:\"fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e\" pid:4254 exited_at:{seconds:1750449393 nanos:729727950}" Jun 20 19:56:33.839306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd0496f650e8c12c0d482e681b9b8fc118e16bd3ece51a04731c6ca9a1248c0e-rootfs.mount: Deactivated successfully. Jun 20 19:56:33.897454 kubelet[3303]: I0620 19:56:33.897168 3303 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:56:33.929433 kubelet[3303]: I0620 19:56:33.929371 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79dfb57759-xp9r6" podStartSLOduration=2.374031087 podStartE2EDuration="4.926578574s" podCreationTimestamp="2025-06-20 19:56:29 +0000 UTC" firstStartedPulling="2025-06-20 19:56:29.70376102 +0000 UTC m=+21.143050688" lastFinishedPulling="2025-06-20 19:56:32.256308512 +0000 UTC m=+23.695598175" observedRunningTime="2025-06-20 19:56:32.951531498 +0000 UTC m=+24.390821184" watchObservedRunningTime="2025-06-20 19:56:33.926578574 +0000 UTC m=+25.365868259" Jun 20 19:56:34.903527 containerd[1986]: time="2025-06-20T19:56:34.903312868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:56:35.725128 kubelet[3303]: E0620 19:56:35.725083 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92w4t" podUID="d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098" Jun 20 19:56:37.725265 kubelet[3303]: E0620 19:56:37.725179 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92w4t" podUID="d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098" Jun 20 19:56:39.726901 kubelet[3303]: E0620 19:56:39.726843 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-92w4t" podUID="d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098" Jun 20 19:56:40.028646 containerd[1986]: time="2025-06-20T19:56:40.028497641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:40.029794 containerd[1986]: time="2025-06-20T19:56:40.029667319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:56:40.030949 containerd[1986]: time="2025-06-20T19:56:40.030912437Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:40.034854 containerd[1986]: time="2025-06-20T19:56:40.034804091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:40.035549 containerd[1986]: time="2025-06-20T19:56:40.035508539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 5.132133777s" Jun 20 19:56:40.035549 containerd[1986]: time="2025-06-20T19:56:40.035544097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:56:40.040998 containerd[1986]: time="2025-06-20T19:56:40.040704958Z" level=info msg="CreateContainer within sandbox \"a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:56:40.061797 containerd[1986]: time="2025-06-20T19:56:40.061745498Z" level=info msg="Container 9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:40.077838 containerd[1986]: time="2025-06-20T19:56:40.077775029Z" level=info msg="CreateContainer within sandbox \"a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c\"" Jun 20 19:56:40.078889 containerd[1986]: time="2025-06-20T19:56:40.078533870Z" level=info msg="StartContainer for \"9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c\"" Jun 20 19:56:40.081488 containerd[1986]: time="2025-06-20T19:56:40.081456505Z" level=info msg="connecting to shim 9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c" address="unix:///run/containerd/s/36d4406bdd9c1ed2240dd4f1489ee36c4b2e77bd9db97395538fae15a7e793e0" protocol=ttrpc version=3 Jun 20 19:56:40.120623 systemd[1]: Started cri-containerd-9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c.scope - libcontainer container 9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c. 
Jun 20 19:56:40.190973 containerd[1986]: time="2025-06-20T19:56:40.190370837Z" level=info msg="StartContainer for \"9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c\" returns successfully" Jun 20 19:56:40.901585 systemd[1]: cri-containerd-9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c.scope: Deactivated successfully. Jun 20 19:56:40.903541 systemd[1]: cri-containerd-9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c.scope: Consumed 587ms CPU time, 158.4M memory peak, 5.5M read from disk, 171.2M written to disk. Jun 20 19:56:41.016183 containerd[1986]: time="2025-06-20T19:56:41.015893677Z" level=info msg="received exit event container_id:\"9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c\" id:\"9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c\" pid:4311 exited_at:{seconds:1750449401 nanos:15387952}" Jun 20 19:56:41.016183 containerd[1986]: time="2025-06-20T19:56:41.016171215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c\" id:\"9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c\" pid:4311 exited_at:{seconds:1750449401 nanos:15387952}" Jun 20 19:56:41.051247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c485254285c53ed6ca96b18080b8272c9838d645f1a4e4f571d85801356654c-rootfs.mount: Deactivated successfully. Jun 20 19:56:41.070270 kubelet[3303]: I0620 19:56:41.070241 3303 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:56:41.144984 systemd[1]: Created slice kubepods-burstable-podf097a8f6_c458_48b9_bc63_afb7bb73c93e.slice - libcontainer container kubepods-burstable-podf097a8f6_c458_48b9_bc63_afb7bb73c93e.slice. Jun 20 19:56:41.166649 systemd[1]: Created slice kubepods-burstable-podb561cf8f_35ea_48f5_afe2_0ab762d67dd4.slice - libcontainer container kubepods-burstable-podb561cf8f_35ea_48f5_afe2_0ab762d67dd4.slice. Jun 20 19:56:41.246201 systemd[1]: Created slice kubepods-besteffort-podbd80f03f_2e8f_45b1_98ea_c2e27c2ca62e.slice - libcontainer container kubepods-besteffort-podbd80f03f_2e8f_45b1_98ea_c2e27c2ca62e.slice. Jun 20 19:56:41.277244 systemd[1]: Created slice kubepods-besteffort-pod16631420_885d_40eb_a4ee_c415ba82428d.slice - libcontainer container kubepods-besteffort-pod16631420_885d_40eb_a4ee_c415ba82428d.slice. Jun 20 19:56:41.285797 systemd[1]: Created slice kubepods-besteffort-podafa6ee2a_ffdb_4330_9d6c_b45312853c2e.slice - libcontainer container kubepods-besteffort-podafa6ee2a_ffdb_4330_9d6c_b45312853c2e.slice. 
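The kubepods-*.slice units created above follow the kubelet's systemd cgroup driver naming: the pod's QoS class plus its UID with dashes replaced by underscores (Guaranteed-QoS pods omit the QoS segment). A small sketch that reproduces the exact slice names seen in the log:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reconstructs the transient slice name used by the kubelet's
// systemd cgroup driver: kubepods-<qos>-pod<uid with "-" replaced by "_">.slice.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Pod UIDs taken from the slice names in the log above.
	fmt.Println(podSliceName("burstable", "f097a8f6-c458-48b9-bc63-afb7bb73c93e"))
	// -> kubepods-burstable-podf097a8f6_c458_48b9_bc63_afb7bb73c93e.slice
	fmt.Println(podSliceName("besteffort", "bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e"))
	// -> kubepods-besteffort-podbd80f03f_2e8f_45b1_98ea_c2e27c2ca62e.slice
}
```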
Jun 20 19:56:41.287711 kubelet[3303]: I0620 19:56:41.287212 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25zf\" (UniqueName: \"kubernetes.io/projected/f097a8f6-c458-48b9-bc63-afb7bb73c93e-kube-api-access-m25zf\") pod \"coredns-674b8bbfcf-x5fjm\" (UID: \"f097a8f6-c458-48b9-bc63-afb7bb73c93e\") " pod="kube-system/coredns-674b8bbfcf-x5fjm" Jun 20 19:56:41.287711 kubelet[3303]: I0620 19:56:41.287267 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5pm2\" (UniqueName: \"kubernetes.io/projected/b561cf8f-35ea-48f5-afe2-0ab762d67dd4-kube-api-access-n5pm2\") pod \"coredns-674b8bbfcf-nlbrf\" (UID: \"b561cf8f-35ea-48f5-afe2-0ab762d67dd4\") " pod="kube-system/coredns-674b8bbfcf-nlbrf" Jun 20 19:56:41.287711 kubelet[3303]: I0620 19:56:41.287289 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f097a8f6-c458-48b9-bc63-afb7bb73c93e-config-volume\") pod \"coredns-674b8bbfcf-x5fjm\" (UID: \"f097a8f6-c458-48b9-bc63-afb7bb73c93e\") " pod="kube-system/coredns-674b8bbfcf-x5fjm" Jun 20 19:56:41.287711 kubelet[3303]: I0620 19:56:41.287307 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b561cf8f-35ea-48f5-afe2-0ab762d67dd4-config-volume\") pod \"coredns-674b8bbfcf-nlbrf\" (UID: \"b561cf8f-35ea-48f5-afe2-0ab762d67dd4\") " pod="kube-system/coredns-674b8bbfcf-nlbrf" Jun 20 19:56:41.296730 systemd[1]: Created slice kubepods-besteffort-podfbc552a9_a018_43df_8b4a_79f304752516.slice - libcontainer container kubepods-besteffort-podfbc552a9_a018_43df_8b4a_79f304752516.slice. Jun 20 19:56:41.305832 systemd[1]: Created slice kubepods-besteffort-pod4475fcaa_a4f4_4fb2_8cf0_a3d06d65a6a7.slice - libcontainer container kubepods-besteffort-pod4475fcaa_a4f4_4fb2_8cf0_a3d06d65a6a7.slice. 
Jun 20 19:56:41.391598 kubelet[3303]: I0620 19:56:41.390146 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e-calico-apiserver-certs\") pod \"calico-apiserver-54b78b49cf-c8srm\" (UID: \"bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e\") " pod="calico-apiserver/calico-apiserver-54b78b49cf-c8srm" Jun 20 19:56:41.391598 kubelet[3303]: I0620 19:56:41.390234 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgng9\" (UniqueName: \"kubernetes.io/projected/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-kube-api-access-lgng9\") pod \"whisker-5bb6976b58-hf5v2\" (UID: \"4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7\") " pod="calico-system/whisker-5bb6976b58-hf5v2" Jun 20 19:56:41.391598 kubelet[3303]: I0620 19:56:41.390269 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llb6q\" (UniqueName: \"kubernetes.io/projected/bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e-kube-api-access-llb6q\") pod \"calico-apiserver-54b78b49cf-c8srm\" (UID: \"bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e\") " pod="calico-apiserver/calico-apiserver-54b78b49cf-c8srm" Jun 20 19:56:41.391598 kubelet[3303]: I0620 19:56:41.390315 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16631420-885d-40eb-a4ee-c415ba82428d-calico-apiserver-certs\") pod \"calico-apiserver-54b78b49cf-hbj2b\" (UID: \"16631420-885d-40eb-a4ee-c415ba82428d\") " pod="calico-apiserver/calico-apiserver-54b78b49cf-hbj2b" Jun 20 19:56:41.391598 kubelet[3303]: I0620 19:56:41.390341 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afa6ee2a-ffdb-4330-9d6c-b45312853c2e-tigera-ca-bundle\") pod \"calico-kube-controllers-66d5f994b4-9qcrd\" (UID: \"afa6ee2a-ffdb-4330-9d6c-b45312853c2e\") " pod="calico-system/calico-kube-controllers-66d5f994b4-9qcrd" Jun 20 19:56:41.391939 kubelet[3303]: I0620 19:56:41.390403 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4pzt\" (UniqueName: \"kubernetes.io/projected/16631420-885d-40eb-a4ee-c415ba82428d-kube-api-access-x4pzt\") pod \"calico-apiserver-54b78b49cf-hbj2b\" (UID: \"16631420-885d-40eb-a4ee-c415ba82428d\") " pod="calico-apiserver/calico-apiserver-54b78b49cf-hbj2b" Jun 20 19:56:41.391939 kubelet[3303]: I0620 19:56:41.390431 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-whisker-backend-key-pair\") pod \"whisker-5bb6976b58-hf5v2\" (UID: \"4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7\") " pod="calico-system/whisker-5bb6976b58-hf5v2" Jun 20 19:56:41.391939 kubelet[3303]: I0620 19:56:41.390471 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbc552a9-a018-43df-8b4a-79f304752516-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-2hbtg\" (UID: \"fbc552a9-a018-43df-8b4a-79f304752516\") " pod="calico-system/goldmane-5bd85449d4-2hbtg" Jun 20 19:56:41.391939 kubelet[3303]: I0620 19:56:41.390554 3303 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fz7\" (UniqueName: \"kubernetes.io/projected/fbc552a9-a018-43df-8b4a-79f304752516-kube-api-access-69fz7\") pod \"goldmane-5bd85449d4-2hbtg\" (UID: \"fbc552a9-a018-43df-8b4a-79f304752516\") " pod="calico-system/goldmane-5bd85449d4-2hbtg" Jun 20 19:56:41.391939 kubelet[3303]: I0620 19:56:41.390634 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-whisker-ca-bundle\") pod \"whisker-5bb6976b58-hf5v2\" (UID: \"4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7\") " pod="calico-system/whisker-5bb6976b58-hf5v2" Jun 20 19:56:41.392154 kubelet[3303]: I0620 19:56:41.390658 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc552a9-a018-43df-8b4a-79f304752516-config\") pod \"goldmane-5bd85449d4-2hbtg\" (UID: \"fbc552a9-a018-43df-8b4a-79f304752516\") " pod="calico-system/goldmane-5bd85449d4-2hbtg" Jun 20 19:56:41.392154 kubelet[3303]: I0620 19:56:41.390703 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fbc552a9-a018-43df-8b4a-79f304752516-goldmane-key-pair\") pod \"goldmane-5bd85449d4-2hbtg\" (UID: \"fbc552a9-a018-43df-8b4a-79f304752516\") " pod="calico-system/goldmane-5bd85449d4-2hbtg" Jun 20 19:56:41.392154 kubelet[3303]: I0620 19:56:41.390731 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmjgd\" (UniqueName: \"kubernetes.io/projected/afa6ee2a-ffdb-4330-9d6c-b45312853c2e-kube-api-access-xmjgd\") pod \"calico-kube-controllers-66d5f994b4-9qcrd\" (UID: \"afa6ee2a-ffdb-4330-9d6c-b45312853c2e\") " pod="calico-system/calico-kube-controllers-66d5f994b4-9qcrd" Jun 20 19:56:41.531189 containerd[1986]: time="2025-06-20T19:56:41.529530259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x5fjm,Uid:f097a8f6-c458-48b9-bc63-afb7bb73c93e,Namespace:kube-system,Attempt:0,}" Jun 20 19:56:41.546052 containerd[1986]: time="2025-06-20T19:56:41.545972979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nlbrf,Uid:b561cf8f-35ea-48f5-afe2-0ab762d67dd4,Namespace:kube-system,Attempt:0,}" Jun 20 19:56:41.575662 containerd[1986]: time="2025-06-20T19:56:41.575627353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b78b49cf-c8srm,Uid:bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:56:41.590978 containerd[1986]: time="2025-06-20T19:56:41.590943821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b78b49cf-hbj2b,Uid:16631420-885d-40eb-a4ee-c415ba82428d,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:56:41.595438 containerd[1986]: time="2025-06-20T19:56:41.595394226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66d5f994b4-9qcrd,Uid:afa6ee2a-ffdb-4330-9d6c-b45312853c2e,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:41.603572 containerd[1986]: time="2025-06-20T19:56:41.603508585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-2hbtg,Uid:fbc552a9-a018-43df-8b4a-79f304752516,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:41.618204 containerd[1986]: time="2025-06-20T19:56:41.618095137Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bb6976b58-hf5v2,Uid:4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:41.731769 systemd[1]: Created slice kubepods-besteffort-podd1b328f6_f5a3_4a18_86d2_b8e5e3cf3098.slice - libcontainer container kubepods-besteffort-podd1b328f6_f5a3_4a18_86d2_b8e5e3cf3098.slice. Jun 20 19:56:41.735977 containerd[1986]: time="2025-06-20T19:56:41.735932786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-92w4t,Uid:d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:41.917053 containerd[1986]: time="2025-06-20T19:56:41.916995350Z" level=error msg="Failed to destroy network for sandbox \"d354844b07c80ccb866bfaa92026945205ec5617d47079e9414c06733a5dacbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.918752 containerd[1986]: time="2025-06-20T19:56:41.918705927Z" level=error msg="Failed to destroy network for sandbox \"dbb2ccdde63819675418a251d92109511dbb98dbefef9313dabb4384333eaf68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.920694 containerd[1986]: time="2025-06-20T19:56:41.920641875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66d5f994b4-9qcrd,Uid:afa6ee2a-ffdb-4330-9d6c-b45312853c2e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d354844b07c80ccb866bfaa92026945205ec5617d47079e9414c06733a5dacbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.922859 containerd[1986]: time="2025-06-20T19:56:41.922807962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x5fjm,Uid:f097a8f6-c458-48b9-bc63-afb7bb73c93e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb2ccdde63819675418a251d92109511dbb98dbefef9313dabb4384333eaf68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.923541 kubelet[3303]: E0620 19:56:41.923512 3303 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb2ccdde63819675418a251d92109511dbb98dbefef9313dabb4384333eaf68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.923651 kubelet[3303]: E0620 19:56:41.923568 3303 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb2ccdde63819675418a251d92109511dbb98dbefef9313dabb4384333eaf68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x5fjm" Jun 20 19:56:41.923651 kubelet[3303]: E0620 19:56:41.923589 3303 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb2ccdde63819675418a251d92109511dbb98dbefef9313dabb4384333eaf68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x5fjm" Jun 20 19:56:41.923708 kubelet[3303]: E0620 19:56:41.923638 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x5fjm_kube-system(f097a8f6-c458-48b9-bc63-afb7bb73c93e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x5fjm_kube-system(f097a8f6-c458-48b9-bc63-afb7bb73c93e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbb2ccdde63819675418a251d92109511dbb98dbefef9313dabb4384333eaf68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x5fjm" podUID="f097a8f6-c458-48b9-bc63-afb7bb73c93e" Jun 20 19:56:41.923837 kubelet[3303]: E0620 19:56:41.923117 3303 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d354844b07c80ccb866bfaa92026945205ec5617d47079e9414c06733a5dacbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.923953 kubelet[3303]: E0620 19:56:41.923852 3303 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d354844b07c80ccb866bfaa92026945205ec5617d47079e9414c06733a5dacbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66d5f994b4-9qcrd" Jun 20 19:56:41.923984 kubelet[3303]: E0620 19:56:41.923961 3303 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d354844b07c80ccb866bfaa92026945205ec5617d47079e9414c06733a5dacbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66d5f994b4-9qcrd" Jun 20 19:56:41.924012 kubelet[3303]: E0620 19:56:41.924000 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66d5f994b4-9qcrd_calico-system(afa6ee2a-ffdb-4330-9d6c-b45312853c2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66d5f994b4-9qcrd_calico-system(afa6ee2a-ffdb-4330-9d6c-b45312853c2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d354844b07c80ccb866bfaa92026945205ec5617d47079e9414c06733a5dacbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66d5f994b4-9qcrd" podUID="afa6ee2a-ffdb-4330-9d6c-b45312853c2e" Jun 20 
19:56:41.940444 containerd[1986]: time="2025-06-20T19:56:41.940398869Z" level=error msg="Failed to destroy network for sandbox \"fe07ad502b9e03ea121455123499321e3607374d6318cf151d50a8cba9220ae3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.941976 containerd[1986]: time="2025-06-20T19:56:41.941945329Z" level=error msg="Failed to destroy network for sandbox \"90fdc5b37b2b77626726a0a8ba5ce7cec48a5e74ba25eb33575d9226fd22df2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.944363 containerd[1986]: time="2025-06-20T19:56:41.944321432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-92w4t,Uid:d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe07ad502b9e03ea121455123499321e3607374d6318cf151d50a8cba9220ae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.946829 kubelet[3303]: E0620 19:56:41.946772 3303 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe07ad502b9e03ea121455123499321e3607374d6318cf151d50a8cba9220ae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.947308 kubelet[3303]: E0620 19:56:41.947280 3303 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe07ad502b9e03ea121455123499321e3607374d6318cf151d50a8cba9220ae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-92w4t" Jun 20 19:56:41.947395 kubelet[3303]: E0620 19:56:41.947331 3303 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe07ad502b9e03ea121455123499321e3607374d6318cf151d50a8cba9220ae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-92w4t" Jun 20 19:56:41.947430 kubelet[3303]: E0620 19:56:41.947400 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-92w4t_calico-system(d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-92w4t_calico-system(d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe07ad502b9e03ea121455123499321e3607374d6318cf151d50a8cba9220ae3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-92w4t" 
podUID="d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098" Jun 20 19:56:41.947604 containerd[1986]: time="2025-06-20T19:56:41.946797558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bb6976b58-hf5v2,Uid:4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90fdc5b37b2b77626726a0a8ba5ce7cec48a5e74ba25eb33575d9226fd22df2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.948239 kubelet[3303]: E0620 19:56:41.948149 3303 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90fdc5b37b2b77626726a0a8ba5ce7cec48a5e74ba25eb33575d9226fd22df2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.948239 kubelet[3303]: E0620 19:56:41.948187 3303 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90fdc5b37b2b77626726a0a8ba5ce7cec48a5e74ba25eb33575d9226fd22df2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bb6976b58-hf5v2" Jun 20 19:56:41.948239 kubelet[3303]: E0620 19:56:41.948206 3303 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90fdc5b37b2b77626726a0a8ba5ce7cec48a5e74ba25eb33575d9226fd22df2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bb6976b58-hf5v2" Jun 20 19:56:41.948453 kubelet[3303]: E0620 19:56:41.948418 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bb6976b58-hf5v2_calico-system(4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bb6976b58-hf5v2_calico-system(4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90fdc5b37b2b77626726a0a8ba5ce7cec48a5e74ba25eb33575d9226fd22df2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bb6976b58-hf5v2" podUID="4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7" Jun 20 19:56:41.963295 containerd[1986]: time="2025-06-20T19:56:41.962977319Z" level=error msg="Failed to destroy network for sandbox \"8b1f6685f4b66f0d0a963ef6981de21204ca18c91d4309395e3cb9cdde3af553\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.965801 containerd[1986]: time="2025-06-20T19:56:41.965746465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b78b49cf-hbj2b,Uid:16631420-885d-40eb-a4ee-c415ba82428d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"8b1f6685f4b66f0d0a963ef6981de21204ca18c91d4309395e3cb9cdde3af553\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.966348 kubelet[3303]: E0620 19:56:41.966005 3303 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1f6685f4b66f0d0a963ef6981de21204ca18c91d4309395e3cb9cdde3af553\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.966348 kubelet[3303]: E0620 19:56:41.966186 3303 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1f6685f4b66f0d0a963ef6981de21204ca18c91d4309395e3cb9cdde3af553\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54b78b49cf-hbj2b" Jun 20 19:56:41.966484 kubelet[3303]: E0620 19:56:41.966458 3303 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1f6685f4b66f0d0a963ef6981de21204ca18c91d4309395e3cb9cdde3af553\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54b78b49cf-hbj2b" Jun 20 19:56:41.967899 kubelet[3303]: E0620 19:56:41.967735 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54b78b49cf-hbj2b_calico-apiserver(16631420-885d-40eb-a4ee-c415ba82428d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54b78b49cf-hbj2b_calico-apiserver(16631420-885d-40eb-a4ee-c415ba82428d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b1f6685f4b66f0d0a963ef6981de21204ca18c91d4309395e3cb9cdde3af553\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54b78b49cf-hbj2b" podUID="16631420-885d-40eb-a4ee-c415ba82428d" Jun 20 19:56:41.973908 containerd[1986]: time="2025-06-20T19:56:41.973614416Z" level=error msg="Failed to destroy network for sandbox \"f646687e712f795ead0ab3007cf74fd2f99652664464e8d8d4ab84c1418e788b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.977336 containerd[1986]: time="2025-06-20T19:56:41.977277603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nlbrf,Uid:b561cf8f-35ea-48f5-afe2-0ab762d67dd4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f646687e712f795ead0ab3007cf74fd2f99652664464e8d8d4ab84c1418e788b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 
19:56:41.978130 kubelet[3303]: E0620 19:56:41.977652 3303 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f646687e712f795ead0ab3007cf74fd2f99652664464e8d8d4ab84c1418e788b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.978130 kubelet[3303]: E0620 19:56:41.977720 3303 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f646687e712f795ead0ab3007cf74fd2f99652664464e8d8d4ab84c1418e788b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nlbrf" Jun 20 19:56:41.978130 kubelet[3303]: E0620 19:56:41.977751 3303 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f646687e712f795ead0ab3007cf74fd2f99652664464e8d8d4ab84c1418e788b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nlbrf" Jun 20 19:56:41.978369 kubelet[3303]: E0620 19:56:41.977808 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nlbrf_kube-system(b561cf8f-35ea-48f5-afe2-0ab762d67dd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nlbrf_kube-system(b561cf8f-35ea-48f5-afe2-0ab762d67dd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f646687e712f795ead0ab3007cf74fd2f99652664464e8d8d4ab84c1418e788b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nlbrf" podUID="b561cf8f-35ea-48f5-afe2-0ab762d67dd4" Jun 20 19:56:41.980400 containerd[1986]: time="2025-06-20T19:56:41.980356714Z" level=error msg="Failed to destroy network for sandbox \"048243cc29b8b50d61470228b3bc43d75b222bfef031e20b19ae08c18b87e120\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.984884 containerd[1986]: time="2025-06-20T19:56:41.983724591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b78b49cf-c8srm,Uid:bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"048243cc29b8b50d61470228b3bc43d75b222bfef031e20b19ae08c18b87e120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.985081 kubelet[3303]: E0620 19:56:41.984379 3303 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"048243cc29b8b50d61470228b3bc43d75b222bfef031e20b19ae08c18b87e120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.985081 kubelet[3303]: E0620 19:56:41.984439 3303 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"048243cc29b8b50d61470228b3bc43d75b222bfef031e20b19ae08c18b87e120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54b78b49cf-c8srm" Jun 20 19:56:41.985081 kubelet[3303]: E0620 19:56:41.984460 3303 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"048243cc29b8b50d61470228b3bc43d75b222bfef031e20b19ae08c18b87e120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54b78b49cf-c8srm" Jun 20 19:56:41.985283 kubelet[3303]: E0620 19:56:41.984511 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54b78b49cf-c8srm_calico-apiserver(bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54b78b49cf-c8srm_calico-apiserver(bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"048243cc29b8b50d61470228b3bc43d75b222bfef031e20b19ae08c18b87e120\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54b78b49cf-c8srm" podUID="bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e" Jun 20 19:56:41.986715 containerd[1986]: time="2025-06-20T19:56:41.986671479Z" level=error msg="Failed to destroy network for sandbox \"ff980cfcda632186e205de54523271606e05f40bead90510affd80b8cec16ea5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.989139 containerd[1986]: time="2025-06-20T19:56:41.989080272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-2hbtg,Uid:fbc552a9-a018-43df-8b4a-79f304752516,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff980cfcda632186e205de54523271606e05f40bead90510affd80b8cec16ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.989835 kubelet[3303]: E0620 19:56:41.989648 3303 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff980cfcda632186e205de54523271606e05f40bead90510affd80b8cec16ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:56:41.989835 kubelet[3303]: E0620 19:56:41.989738 3303 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ff980cfcda632186e205de54523271606e05f40bead90510affd80b8cec16ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-2hbtg" Jun 20 19:56:41.989835 kubelet[3303]: E0620 19:56:41.989775 3303 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff980cfcda632186e205de54523271606e05f40bead90510affd80b8cec16ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-2hbtg" Jun 20 19:56:41.990362 kubelet[3303]: E0620 19:56:41.990023 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-2hbtg_calico-system(fbc552a9-a018-43df-8b4a-79f304752516)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-2hbtg_calico-system(fbc552a9-a018-43df-8b4a-79f304752516)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff980cfcda632186e205de54523271606e05f40bead90510affd80b8cec16ea5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-2hbtg" podUID="fbc552a9-a018-43df-8b4a-79f304752516" Jun 20 19:56:42.020161 containerd[1986]: time="2025-06-20T19:56:42.019946713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:56:48.173027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3389114721.mount: Deactivated successfully. 
Jun 20 19:56:48.248929 containerd[1986]: time="2025-06-20T19:56:48.248871242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:48.260593 containerd[1986]: time="2025-06-20T19:56:48.260483938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:56:48.292464 containerd[1986]: time="2025-06-20T19:56:48.272038299Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:48.292464 containerd[1986]: time="2025-06-20T19:56:48.274523380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:48.292464 containerd[1986]: time="2025-06-20T19:56:48.274905508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 6.254890703s" Jun 20 19:56:48.292464 containerd[1986]: time="2025-06-20T19:56:48.274945664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:56:48.337774 containerd[1986]: time="2025-06-20T19:56:48.337727692Z" level=info msg="CreateContainer within sandbox \"a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:56:48.374413 containerd[1986]: time="2025-06-20T19:56:48.374362126Z" level=info msg="Container 130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:48.375580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217115383.mount: Deactivated successfully. Jun 20 19:56:48.425431 containerd[1986]: time="2025-06-20T19:56:48.425157100Z" level=info msg="CreateContainer within sandbox \"a8de73ba87362a171b29f6b37aa2fd51acc645e2b00fd3ebc153203516c646b8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca\"" Jun 20 19:56:48.425982 containerd[1986]: time="2025-06-20T19:56:48.425727617Z" level=info msg="StartContainer for \"130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca\"" Jun 20 19:56:48.433628 containerd[1986]: time="2025-06-20T19:56:48.433530452Z" level=info msg="connecting to shim 130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca" address="unix:///run/containerd/s/36d4406bdd9c1ed2240dd4f1489ee36c4b2e77bd9db97395538fae15a7e793e0" protocol=ttrpc version=3 Jun 20 19:56:48.638554 systemd[1]: Started cri-containerd-130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca.scope - libcontainer container 130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca. 
Jun 20 19:56:48.733484 containerd[1986]: time="2025-06-20T19:56:48.733125505Z" level=info msg="StartContainer for \"130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca\" returns successfully" Jun 20 19:56:49.168388 kubelet[3303]: I0620 19:56:49.166672 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hnwtx" podStartSLOduration=1.9577071849999998 podStartE2EDuration="20.166647859s" podCreationTimestamp="2025-06-20 19:56:29 +0000 UTC" firstStartedPulling="2025-06-20 19:56:30.066796612 +0000 UTC m=+21.506086277" lastFinishedPulling="2025-06-20 19:56:48.275737287 +0000 UTC m=+39.715026951" observedRunningTime="2025-06-20 19:56:49.165761113 +0000 UTC m=+40.605050798" watchObservedRunningTime="2025-06-20 19:56:49.166647859 +0000 UTC m=+40.605937629" Jun 20 19:56:50.130655 kubelet[3303]: I0620 19:56:50.129243 3303 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:56:50.966899 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:56:50.968439 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 20 19:56:50.981338 containerd[1986]: time="2025-06-20T19:56:50.981207208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca\" id:\"ae90e695e5e6d5f6a5df106000920e079a8bf9be820b79e55ba84289a1ee4420\" pid:4622 exit_status:1 exited_at:{seconds:1750449410 nanos:980623235}" Jun 20 19:56:51.134070 containerd[1986]: time="2025-06-20T19:56:51.133927273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca\" id:\"9c9b44d06e5b6595d0f8f3c2993efc0c918b411fe6151603f6a46a5fda6ebb5a\" pid:4655 exit_status:1 exited_at:{seconds:1750449411 nanos:133568139}" Jun 20 19:56:51.258772 containerd[1986]: time="2025-06-20T19:56:51.258648796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca\" id:\"d0a6d475448f87ec080fd14c0ada969e8aef06d666b9a4e258aaf97332c870b6\" pid:4680 exit_status:1 exited_at:{seconds:1750449411 nanos:258249081}" Jun 20 19:56:51.673191 kubelet[3303]: I0620 19:56:51.673138 3303 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-whisker-backend-key-pair\") pod \"4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7\" (UID: \"4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7\") " Jun 20 19:56:51.674285 kubelet[3303]: I0620 19:56:51.673215 3303 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-whisker-ca-bundle\") pod \"4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7\" (UID: \"4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7\") " Jun 20 19:56:51.674285 kubelet[3303]: I0620 19:56:51.673269 3303 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgng9\" (UniqueName: \"kubernetes.io/projected/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-kube-api-access-lgng9\") pod \"4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7\" (UID: \"4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7\") " Jun 20 19:56:51.690017 kubelet[3303]: I0620 19:56:51.689106 3303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7" (UID: "4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:56:51.704056 kubelet[3303]: I0620 19:56:51.700939 3303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7" (UID: "4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:56:51.707751 kubelet[3303]: I0620 19:56:51.707474 3303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-kube-api-access-lgng9" (OuterVolumeSpecName: "kube-api-access-lgng9") pod "4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7" (UID: "4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7"). InnerVolumeSpecName "kube-api-access-lgng9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:56:51.708299 systemd[1]: var-lib-kubelet-pods-4475fcaa\x2da4f4\x2d4fb2\x2d8cf0\x2da3d06d65a6a7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 20 19:56:51.715911 systemd[1]: var-lib-kubelet-pods-4475fcaa\x2da4f4\x2d4fb2\x2d8cf0\x2da3d06d65a6a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlgng9.mount: Deactivated successfully. Jun 20 19:56:51.774897 kubelet[3303]: I0620 19:56:51.774731 3303 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-whisker-ca-bundle\") on node \"ip-172-31-30-175\" DevicePath \"\"" Jun 20 19:56:51.774897 kubelet[3303]: I0620 19:56:51.774756 3303 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lgng9\" (UniqueName: \"kubernetes.io/projected/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-kube-api-access-lgng9\") on node \"ip-172-31-30-175\" DevicePath \"\"" Jun 20 19:56:51.774897 kubelet[3303]: I0620 19:56:51.774766 3303 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7-whisker-backend-key-pair\") on node \"ip-172-31-30-175\" DevicePath \"\"" Jun 20 19:56:52.141164 systemd[1]: Removed slice kubepods-besteffort-pod4475fcaa_a4f4_4fb2_8cf0_a3d06d65a6a7.slice - libcontainer container kubepods-besteffort-pod4475fcaa_a4f4_4fb2_8cf0_a3d06d65a6a7.slice. Jun 20 19:56:52.261596 systemd[1]: Created slice kubepods-besteffort-podc0ce6036_8720_49b9_a92f_30cc5195fab1.slice - libcontainer container kubepods-besteffort-podc0ce6036_8720_49b9_a92f_30cc5195fab1.slice. 
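The mount unit names systemd deactivates above are just escaped kubelet volume paths: '/' becomes '-', a literal '-' becomes '\x2d', and '~' becomes '\x7e'. A small Go sketch that reverses the escaping (simplified; systemd-escape handles more cases):

    package main

    import (
        "fmt"
        "strings"
    )

    // unitToPath reverses systemd's mount-unit name escaping as seen in the
    // units above: "-" separates path components, while literal "-" and "~"
    // appear as "\x2d" and "\x7e".
    func unitToPath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        name = strings.ReplaceAll(name, "-", "/")
        name = strings.ReplaceAll(name, `\x2d`, "-")
        name = strings.ReplaceAll(name, `\x7e`, "~")
        return "/" + name
    }

    func main() {
        fmt.Println(unitToPath(`var-lib-kubelet-pods-4475fcaa\x2da4f4\x2d4fb2\x2d8cf0\x2da3d06d65a6a7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount`))
        // /var/lib/kubelet/pods/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7/volumes/kubernetes.io~secret/whisker-backend-key-pair
    }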
Jun 20 19:56:52.379721 kubelet[3303]: I0620 19:56:52.379674 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7g5k\" (UniqueName: \"kubernetes.io/projected/c0ce6036-8720-49b9-a92f-30cc5195fab1-kube-api-access-b7g5k\") pod \"whisker-988b5b6d7-4wshk\" (UID: \"c0ce6036-8720-49b9-a92f-30cc5195fab1\") " pod="calico-system/whisker-988b5b6d7-4wshk" Jun 20 19:56:52.379721 kubelet[3303]: I0620 19:56:52.379731 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c0ce6036-8720-49b9-a92f-30cc5195fab1-whisker-backend-key-pair\") pod \"whisker-988b5b6d7-4wshk\" (UID: \"c0ce6036-8720-49b9-a92f-30cc5195fab1\") " pod="calico-system/whisker-988b5b6d7-4wshk" Jun 20 19:56:52.379721 kubelet[3303]: I0620 19:56:52.379750 3303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0ce6036-8720-49b9-a92f-30cc5195fab1-whisker-ca-bundle\") pod \"whisker-988b5b6d7-4wshk\" (UID: \"c0ce6036-8720-49b9-a92f-30cc5195fab1\") " pod="calico-system/whisker-988b5b6d7-4wshk" Jun 20 19:56:52.566693 containerd[1986]: time="2025-06-20T19:56:52.566580063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-988b5b6d7-4wshk,Uid:c0ce6036-8720-49b9-a92f-30cc5195fab1,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:52.733812 kubelet[3303]: I0620 19:56:52.733770 3303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7" path="/var/lib/kubelet/pods/4475fcaa-a4f4-4fb2-8cf0-a3d06d65a6a7/volumes" Jun 20 19:56:53.132126 (udev-worker)[4636]: Network interface NamePolicy= disabled on kernel command line. 
Jun 20 19:56:53.138249 systemd-networkd[1834]: cali37cf0736ec1: Link UP Jun 20 19:56:53.138724 systemd-networkd[1834]: cali37cf0736ec1: Gained carrier Jun 20 19:56:53.191388 containerd[1986]: 2025-06-20 19:56:52.607 [INFO][4720] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:56:53.191388 containerd[1986]: 2025-06-20 19:56:52.671 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0 whisker-988b5b6d7- calico-system c0ce6036-8720-49b9-a92f-30cc5195fab1 934 0 2025-06-20 19:56:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:988b5b6d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-175 whisker-988b5b6d7-4wshk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali37cf0736ec1 [] [] }} ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Namespace="calico-system" Pod="whisker-988b5b6d7-4wshk" WorkloadEndpoint="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-" Jun 20 19:56:53.191388 containerd[1986]: 2025-06-20 19:56:52.671 [INFO][4720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Namespace="calico-system" Pod="whisker-988b5b6d7-4wshk" WorkloadEndpoint="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" Jun 20 19:56:53.191388 containerd[1986]: 2025-06-20 19:56:53.015 [INFO][4727] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" HandleID="k8s-pod-network.921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Workload="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" Jun 20 19:56:53.191931 containerd[1986]: 2025-06-20 19:56:53.018 [INFO][4727] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" HandleID="k8s-pod-network.921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Workload="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-175", "pod":"whisker-988b5b6d7-4wshk", "timestamp":"2025-06-20 19:56:53.015970263 +0000 UTC"}, Hostname:"ip-172-31-30-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:56:53.191931 containerd[1986]: 2025-06-20 19:56:53.018 [INFO][4727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:56:53.191931 containerd[1986]: 2025-06-20 19:56:53.019 [INFO][4727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:56:53.191931 containerd[1986]: 2025-06-20 19:56:53.019 [INFO][4727] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-175' Jun 20 19:56:53.191931 containerd[1986]: 2025-06-20 19:56:53.051 [INFO][4727] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" host="ip-172-31-30-175" Jun 20 19:56:53.191931 containerd[1986]: 2025-06-20 19:56:53.070 [INFO][4727] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-175" Jun 20 19:56:53.191931 containerd[1986]: 2025-06-20 19:56:53.079 [INFO][4727] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:53.191931 containerd[1986]: 2025-06-20 19:56:53.083 [INFO][4727] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:53.191931 containerd[1986]: 2025-06-20 19:56:53.088 [INFO][4727] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:53.192576 containerd[1986]: 2025-06-20 19:56:53.088 [INFO][4727] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" host="ip-172-31-30-175" Jun 20 19:56:53.192576 containerd[1986]: 2025-06-20 19:56:53.091 [INFO][4727] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258 Jun 20 19:56:53.192576 containerd[1986]: 2025-06-20 19:56:53.098 [INFO][4727] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" host="ip-172-31-30-175" Jun 20 19:56:53.192576 containerd[1986]: 2025-06-20 19:56:53.110 [INFO][4727] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.65/26] block=192.168.9.64/26 handle="k8s-pod-network.921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" host="ip-172-31-30-175" Jun 20 19:56:53.192576 containerd[1986]: 2025-06-20 19:56:53.110 [INFO][4727] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.65/26] handle="k8s-pod-network.921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" host="ip-172-31-30-175" Jun 20 19:56:53.192576 containerd[1986]: 2025-06-20 19:56:53.110 [INFO][4727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:56:53.192576 containerd[1986]: 2025-06-20 19:56:53.110 [INFO][4727] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.65/26] IPv6=[] ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" HandleID="k8s-pod-network.921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Workload="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" Jun 20 19:56:53.193017 containerd[1986]: 2025-06-20 19:56:53.115 [INFO][4720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Namespace="calico-system" Pod="whisker-988b5b6d7-4wshk" WorkloadEndpoint="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0", GenerateName:"whisker-988b5b6d7-", Namespace:"calico-system", SelfLink:"", UID:"c0ce6036-8720-49b9-a92f-30cc5195fab1", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"988b5b6d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"", Pod:"whisker-988b5b6d7-4wshk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali37cf0736ec1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:53.193017 containerd[1986]: 2025-06-20 19:56:53.115 [INFO][4720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.65/32] ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Namespace="calico-system" Pod="whisker-988b5b6d7-4wshk" WorkloadEndpoint="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" Jun 20 19:56:53.193326 containerd[1986]: 2025-06-20 19:56:53.115 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37cf0736ec1 ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Namespace="calico-system" Pod="whisker-988b5b6d7-4wshk" WorkloadEndpoint="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" Jun 20 19:56:53.193326 containerd[1986]: 2025-06-20 19:56:53.153 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Namespace="calico-system" Pod="whisker-988b5b6d7-4wshk" WorkloadEndpoint="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" Jun 20 19:56:53.193427 containerd[1986]: 2025-06-20 19:56:53.154 [INFO][4720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Namespace="calico-system" Pod="whisker-988b5b6d7-4wshk" 
WorkloadEndpoint="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0", GenerateName:"whisker-988b5b6d7-", Namespace:"calico-system", SelfLink:"", UID:"c0ce6036-8720-49b9-a92f-30cc5195fab1", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"988b5b6d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258", Pod:"whisker-988b5b6d7-4wshk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali37cf0736ec1", MAC:"32:b0:cf:ca:b0:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:53.193568 containerd[1986]: 2025-06-20 19:56:53.184 [INFO][4720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" Namespace="calico-system" Pod="whisker-988b5b6d7-4wshk" WorkloadEndpoint="ip--172--31--30--175-k8s-whisker--988b5b6d7--4wshk-eth0" Jun 20 19:56:53.437158 containerd[1986]: time="2025-06-20T19:56:53.437020822Z" level=info msg="connecting to shim 921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258" address="unix:///run/containerd/s/dd7301b1f31debae588d4365736629e3590d5a09da4d1f3e7aa4220b952ff840" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:53.491594 systemd[1]: Started cri-containerd-921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258.scope - libcontainer container 921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258. 
Jun 20 19:56:53.619321 containerd[1986]: time="2025-06-20T19:56:53.619273622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-988b5b6d7-4wshk,Uid:c0ce6036-8720-49b9-a92f-30cc5195fab1,Namespace:calico-system,Attempt:0,} returns sandbox id \"921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258\"" Jun 20 19:56:53.641180 containerd[1986]: time="2025-06-20T19:56:53.641123470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:56:53.725339 containerd[1986]: time="2025-06-20T19:56:53.724999572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b78b49cf-c8srm,Uid:bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:56:53.866433 systemd-networkd[1834]: cali6cb488f17ce: Link UP Jun 20 19:56:53.868134 systemd-networkd[1834]: cali6cb488f17ce: Gained carrier Jun 20 19:56:53.889055 containerd[1986]: 2025-06-20 19:56:53.759 [INFO][4879] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:56:53.889055 containerd[1986]: 2025-06-20 19:56:53.772 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0 calico-apiserver-54b78b49cf- calico-apiserver bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e 858 0 2025-06-20 19:56:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54b78b49cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-175 calico-apiserver-54b78b49cf-c8srm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cb488f17ce [] [] }} ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-c8srm" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-" Jun 20 19:56:53.889055 containerd[1986]: 2025-06-20 19:56:53.773 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-c8srm" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" Jun 20 19:56:53.889055 containerd[1986]: 2025-06-20 19:56:53.803 [INFO][4890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" HandleID="k8s-pod-network.380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Workload="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" Jun 20 19:56:53.889796 containerd[1986]: 2025-06-20 19:56:53.803 [INFO][4890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" HandleID="k8s-pod-network.380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Workload="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-175", "pod":"calico-apiserver-54b78b49cf-c8srm", "timestamp":"2025-06-20 19:56:53.803150384 +0000 UTC"}, Hostname:"ip-172-31-30-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:56:53.889796 containerd[1986]: 2025-06-20 19:56:53.803 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:56:53.889796 containerd[1986]: 2025-06-20 19:56:53.803 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:56:53.889796 containerd[1986]: 2025-06-20 19:56:53.803 [INFO][4890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-175' Jun 20 19:56:53.889796 containerd[1986]: 2025-06-20 19:56:53.813 [INFO][4890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" host="ip-172-31-30-175" Jun 20 19:56:53.889796 containerd[1986]: 2025-06-20 19:56:53.820 [INFO][4890] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-175" Jun 20 19:56:53.889796 containerd[1986]: 2025-06-20 19:56:53.828 [INFO][4890] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:53.889796 containerd[1986]: 2025-06-20 19:56:53.831 [INFO][4890] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:53.889796 containerd[1986]: 2025-06-20 19:56:53.835 [INFO][4890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:53.890122 containerd[1986]: 2025-06-20 19:56:53.836 [INFO][4890] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" host="ip-172-31-30-175" Jun 20 19:56:53.890122 containerd[1986]: 2025-06-20 19:56:53.838 [INFO][4890] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a Jun 20 19:56:53.890122 containerd[1986]: 2025-06-20 19:56:53.844 [INFO][4890] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" host="ip-172-31-30-175" Jun 20 19:56:53.890122 containerd[1986]: 2025-06-20 19:56:53.854 [INFO][4890] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.66/26] block=192.168.9.64/26 handle="k8s-pod-network.380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" host="ip-172-31-30-175" Jun 20 19:56:53.890122 containerd[1986]: 2025-06-20 19:56:53.854 [INFO][4890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.66/26] handle="k8s-pod-network.380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" host="ip-172-31-30-175" Jun 20 19:56:53.890122 containerd[1986]: 2025-06-20 19:56:53.854 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:56:53.890122 containerd[1986]: 2025-06-20 19:56:53.855 [INFO][4890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.66/26] IPv6=[] ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" HandleID="k8s-pod-network.380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Workload="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" Jun 20 19:56:53.890329 containerd[1986]: 2025-06-20 19:56:53.859 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-c8srm" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0", GenerateName:"calico-apiserver-54b78b49cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b78b49cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"", Pod:"calico-apiserver-54b78b49cf-c8srm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb488f17ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:53.890408 containerd[1986]: 2025-06-20 19:56:53.859 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.66/32] ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-c8srm" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" Jun 20 19:56:53.890408 containerd[1986]: 2025-06-20 19:56:53.859 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cb488f17ce ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-c8srm" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" Jun 20 19:56:53.890408 containerd[1986]: 2025-06-20 19:56:53.869 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-c8srm" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" Jun 20 19:56:53.890493 containerd[1986]: 2025-06-20 19:56:53.869 [INFO][4879] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-c8srm" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0", GenerateName:"calico-apiserver-54b78b49cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b78b49cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a", Pod:"calico-apiserver-54b78b49cf-c8srm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb488f17ce", MAC:"ae:74:c1:16:63:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:53.890554 containerd[1986]: 2025-06-20 19:56:53.884 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-c8srm" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--c8srm-eth0" Jun 20 19:56:53.923340 containerd[1986]: time="2025-06-20T19:56:53.923274733Z" level=info msg="connecting to shim 380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a" address="unix:///run/containerd/s/7f0a655e334221d1dd4af6e1e8acc262608272acc0929baec0fcba5f43f0c485" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:53.961440 systemd[1]: Started cri-containerd-380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a.scope - libcontainer container 380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a. 
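Each CNI add in this journal walks the same ipam_plugin sequence: acquire the host-wide IPAM lock, confirm the node's affinity for block 192.168.9.64/26, claim the next free address (.65, .66, ... in order), write the block back, and release the lock. The toy Go allocator below mirrors only that visible behaviour, under stated assumptions (one block, sequential scan, network address skipped); it is a sketch, not Calico's actual allocator:

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// toyAllocator hands out addresses from a single block in order, under a
// lock, mirroring the acquire-lock / load-block / claim-IP / release-lock
// pattern the ipam_plugin log lines show. Simplified sketch only.
type toyAllocator struct {
	mu    sync.Mutex
	block netip.Prefix
	next  netip.Addr
	used  map[netip.Addr]string // addr -> handle
}

func newToyAllocator(block netip.Prefix) *toyAllocator {
	return &toyAllocator{
		block: block,
		next:  block.Addr().Next(), // assumption: skip the network address
		used:  map[netip.Addr]string{},
	}
}

func (a *toyAllocator) assign(handle string) (netip.Addr, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for addr := a.next; a.block.Contains(addr); addr = addr.Next() {
		if _, taken := a.used[addr]; !taken {
			a.used[addr] = handle
			a.next = addr.Next()
			return addr, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", a.block)
}

func main() {
	alloc := newToyAllocator(netip.MustParsePrefix("192.168.9.64/26"))
	for _, pod := range []string{"whisker-988b5b6d7-4wshk", "calico-apiserver-54b78b49cf-c8srm", "coredns-674b8bbfcf-nlbrf"} {
		ip, _ := alloc.assign(pod)
		fmt.Println(pod, "->", ip) // .65, .66, .67, matching the log's order
	}
}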
Jun 20 19:56:54.028426 containerd[1986]: time="2025-06-20T19:56:54.028372622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b78b49cf-c8srm,Uid:bd80f03f-2e8f-45b1-98ea-c2e27c2ca62e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a\"" Jun 20 19:56:54.343458 systemd-networkd[1834]: cali37cf0736ec1: Gained IPv6LL Jun 20 19:56:54.728104 containerd[1986]: time="2025-06-20T19:56:54.728049175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nlbrf,Uid:b561cf8f-35ea-48f5-afe2-0ab762d67dd4,Namespace:kube-system,Attempt:0,}" Jun 20 19:56:54.897255 containerd[1986]: time="2025-06-20T19:56:54.897081197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:54.899448 containerd[1986]: time="2025-06-20T19:56:54.899399721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:56:54.900810 containerd[1986]: time="2025-06-20T19:56:54.900761293Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:54.910414 containerd[1986]: time="2025-06-20T19:56:54.910365218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:56:54.912165 containerd[1986]: time="2025-06-20T19:56:54.911949217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.270788778s" Jun 20 19:56:54.912165 containerd[1986]: time="2025-06-20T19:56:54.911996849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:56:54.914972 containerd[1986]: time="2025-06-20T19:56:54.914919544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:56:54.942311 containerd[1986]: time="2025-06-20T19:56:54.942109384Z" level=info msg="CreateContainer within sandbox \"921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:56:54.954303 systemd-networkd[1834]: calied743e589ff: Link UP Jun 20 19:56:54.954608 systemd-networkd[1834]: calied743e589ff: Gained carrier Jun 20 19:56:54.960623 containerd[1986]: time="2025-06-20T19:56:54.959356898Z" level=info msg="Container 970ac5a3da3c9401c6c51918a8dfd68f11c4df11b2ce40e4d37e4b6c32f8bde1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:54.985926 containerd[1986]: time="2025-06-20T19:56:54.985434674Z" level=info msg="CreateContainer within sandbox \"921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"970ac5a3da3c9401c6c51918a8dfd68f11c4df11b2ce40e4d37e4b6c32f8bde1\"" Jun 20 19:56:54.989294 containerd[1986]: time="2025-06-20T19:56:54.988614940Z" level=info msg="StartContainer for 
\"970ac5a3da3c9401c6c51918a8dfd68f11c4df11b2ce40e4d37e4b6c32f8bde1\"" Jun 20 19:56:54.991809 containerd[1986]: time="2025-06-20T19:56:54.991769002Z" level=info msg="connecting to shim 970ac5a3da3c9401c6c51918a8dfd68f11c4df11b2ce40e4d37e4b6c32f8bde1" address="unix:///run/containerd/s/dd7301b1f31debae588d4365736629e3590d5a09da4d1f3e7aa4220b952ff840" protocol=ttrpc version=3 Jun 20 19:56:54.996282 containerd[1986]: 2025-06-20 19:56:54.783 [INFO][4972] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:56:54.996282 containerd[1986]: 2025-06-20 19:56:54.809 [INFO][4972] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0 coredns-674b8bbfcf- kube-system b561cf8f-35ea-48f5-afe2-0ab762d67dd4 857 0 2025-06-20 19:56:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-175 coredns-674b8bbfcf-nlbrf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied743e589ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Namespace="kube-system" Pod="coredns-674b8bbfcf-nlbrf" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-" Jun 20 19:56:54.996282 containerd[1986]: 2025-06-20 19:56:54.810 [INFO][4972] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Namespace="kube-system" Pod="coredns-674b8bbfcf-nlbrf" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" Jun 20 19:56:54.996282 containerd[1986]: 2025-06-20 19:56:54.866 [INFO][4984] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" HandleID="k8s-pod-network.d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Workload="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" Jun 20 19:56:54.998267 containerd[1986]: 2025-06-20 19:56:54.868 [INFO][4984] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" HandleID="k8s-pod-network.d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Workload="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-175", "pod":"coredns-674b8bbfcf-nlbrf", "timestamp":"2025-06-20 19:56:54.866899603 +0000 UTC"}, Hostname:"ip-172-31-30-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:56:54.998267 containerd[1986]: 2025-06-20 19:56:54.868 [INFO][4984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:56:54.998267 containerd[1986]: 2025-06-20 19:56:54.868 [INFO][4984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:56:54.998267 containerd[1986]: 2025-06-20 19:56:54.868 [INFO][4984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-175' Jun 20 19:56:54.998267 containerd[1986]: 2025-06-20 19:56:54.883 [INFO][4984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" host="ip-172-31-30-175" Jun 20 19:56:54.998267 containerd[1986]: 2025-06-20 19:56:54.893 [INFO][4984] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-175" Jun 20 19:56:54.998267 containerd[1986]: 2025-06-20 19:56:54.909 [INFO][4984] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:54.998267 containerd[1986]: 2025-06-20 19:56:54.915 [INFO][4984] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:54.998267 containerd[1986]: 2025-06-20 19:56:54.920 [INFO][4984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:54.998673 containerd[1986]: 2025-06-20 19:56:54.920 [INFO][4984] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" host="ip-172-31-30-175" Jun 20 19:56:54.998673 containerd[1986]: 2025-06-20 19:56:54.923 [INFO][4984] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425 Jun 20 19:56:54.998673 containerd[1986]: 2025-06-20 19:56:54.933 [INFO][4984] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" host="ip-172-31-30-175" Jun 20 19:56:54.998673 containerd[1986]: 2025-06-20 19:56:54.944 [INFO][4984] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.67/26] block=192.168.9.64/26 handle="k8s-pod-network.d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" host="ip-172-31-30-175" Jun 20 19:56:54.998673 containerd[1986]: 2025-06-20 19:56:54.944 [INFO][4984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.67/26] handle="k8s-pod-network.d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" host="ip-172-31-30-175" Jun 20 19:56:54.998673 containerd[1986]: 2025-06-20 19:56:54.944 [INFO][4984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:56:54.998673 containerd[1986]: 2025-06-20 19:56:54.944 [INFO][4984] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.67/26] IPv6=[] ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" HandleID="k8s-pod-network.d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Workload="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" Jun 20 19:56:54.998946 containerd[1986]: 2025-06-20 19:56:54.949 [INFO][4972] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Namespace="kube-system" Pod="coredns-674b8bbfcf-nlbrf" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b561cf8f-35ea-48f5-afe2-0ab762d67dd4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"", Pod:"coredns-674b8bbfcf-nlbrf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied743e589ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:54.998946 containerd[1986]: 2025-06-20 19:56:54.949 [INFO][4972] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.67/32] ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Namespace="kube-system" Pod="coredns-674b8bbfcf-nlbrf" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" Jun 20 19:56:54.998946 containerd[1986]: 2025-06-20 19:56:54.949 [INFO][4972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied743e589ff ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Namespace="kube-system" Pod="coredns-674b8bbfcf-nlbrf" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" Jun 20 19:56:54.998946 containerd[1986]: 2025-06-20 19:56:54.955 [INFO][4972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Namespace="kube-system" Pod="coredns-674b8bbfcf-nlbrf" 
WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" Jun 20 19:56:54.998946 containerd[1986]: 2025-06-20 19:56:54.958 [INFO][4972] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Namespace="kube-system" Pod="coredns-674b8bbfcf-nlbrf" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b561cf8f-35ea-48f5-afe2-0ab762d67dd4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425", Pod:"coredns-674b8bbfcf-nlbrf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied743e589ff", MAC:"ba:d9:5e:d8:c0:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:54.998946 containerd[1986]: 2025-06-20 19:56:54.986 [INFO][4972] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" Namespace="kube-system" Pod="coredns-674b8bbfcf-nlbrf" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--nlbrf-eth0" Jun 20 19:56:55.035468 systemd[1]: Started cri-containerd-970ac5a3da3c9401c6c51918a8dfd68f11c4df11b2ce40e4d37e4b6c32f8bde1.scope - libcontainer container 970ac5a3da3c9401c6c51918a8dfd68f11c4df11b2ce40e4d37e4b6c32f8bde1. Jun 20 19:56:55.048041 containerd[1986]: time="2025-06-20T19:56:55.047895229Z" level=info msg="connecting to shim d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425" address="unix:///run/containerd/s/d1d855fc8e6ef16247eb52d728dc0b952ddf5a2815c75a197a8dfec043094ab6" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:55.087472 systemd[1]: Started cri-containerd-d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425.scope - libcontainer container d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425. 
Jun 20 19:56:55.133702 containerd[1986]: time="2025-06-20T19:56:55.133658320Z" level=info msg="StartContainer for \"970ac5a3da3c9401c6c51918a8dfd68f11c4df11b2ce40e4d37e4b6c32f8bde1\" returns successfully" Jun 20 19:56:55.182588 containerd[1986]: time="2025-06-20T19:56:55.182388975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nlbrf,Uid:b561cf8f-35ea-48f5-afe2-0ab762d67dd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425\"" Jun 20 19:56:55.198592 containerd[1986]: time="2025-06-20T19:56:55.198522374Z" level=info msg="CreateContainer within sandbox \"d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:56:55.240266 containerd[1986]: time="2025-06-20T19:56:55.239811204Z" level=info msg="Container c4c17259de20f81107d966694d2681bab715198ffc7e52553e531624081742eb: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:55.252240 containerd[1986]: time="2025-06-20T19:56:55.252054663Z" level=info msg="CreateContainer within sandbox \"d888027409cfee2c121b9b06bafbe467bab26e2b2e3fb623e934c75be3bc8425\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c4c17259de20f81107d966694d2681bab715198ffc7e52553e531624081742eb\"" Jun 20 19:56:55.254305 containerd[1986]: time="2025-06-20T19:56:55.252967483Z" level=info msg="StartContainer for \"c4c17259de20f81107d966694d2681bab715198ffc7e52553e531624081742eb\"" Jun 20 19:56:55.255894 containerd[1986]: time="2025-06-20T19:56:55.255678458Z" level=info msg="connecting to shim c4c17259de20f81107d966694d2681bab715198ffc7e52553e531624081742eb" address="unix:///run/containerd/s/d1d855fc8e6ef16247eb52d728dc0b952ddf5a2815c75a197a8dfec043094ab6" protocol=ttrpc version=3 Jun 20 19:56:55.280455 systemd[1]: Started cri-containerd-c4c17259de20f81107d966694d2681bab715198ffc7e52553e531624081742eb.scope - libcontainer container c4c17259de20f81107d966694d2681bab715198ffc7e52553e531624081742eb. 
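Note how the whisker container (970ac5a3...) connects to the same shim socket that was created for its pod sandbox (921ac1e3..., socket dd7301b1...), and the coredns container (c4c17259...) reuses its sandbox's socket (d1d855fc...). A small Go sketch that groups the "connecting to shim" messages by socket makes that pairing visible; the sample lines are abbreviated copies of the journal entries above:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Abbreviated copies of the "connecting to shim" entries from this journal.
	lines := []string{
		`connecting to shim 921ac1e34fe0 address="unix:///run/containerd/s/dd7301b1f31d"`,
		`connecting to shim 970ac5a3da3c address="unix:///run/containerd/s/dd7301b1f31d"`,
		`connecting to shim d888027409cf address="unix:///run/containerd/s/d1d855fc8e6e"`,
		`connecting to shim c4c17259de20 address="unix:///run/containerd/s/d1d855fc8e6e"`,
	}
	re := regexp.MustCompile(`connecting to shim (\S+) address="([^"]+)"`)
	bySocket := map[string][]string{}
	for _, l := range lines {
		if m := re.FindStringSubmatch(l); m != nil {
			bySocket[m[2]] = append(bySocket[m[2]], m[1])
		}
	}
	// The log shows containers created inside a sandbox reusing the sandbox's shim socket.
	for sock, ids := range bySocket {
		fmt.Println(sock, "->", ids)
	}
}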
Jun 20 19:56:55.349606 containerd[1986]: time="2025-06-20T19:56:55.349567283Z" level=info msg="StartContainer for \"c4c17259de20f81107d966694d2681bab715198ffc7e52553e531624081742eb\" returns successfully" Jun 20 19:56:55.559534 systemd-networkd[1834]: cali6cb488f17ce: Gained IPv6LL Jun 20 19:56:55.725731 containerd[1986]: time="2025-06-20T19:56:55.725622427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66d5f994b4-9qcrd,Uid:afa6ee2a-ffdb-4330-9d6c-b45312853c2e,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:55.726772 containerd[1986]: time="2025-06-20T19:56:55.726596133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-2hbtg,Uid:fbc552a9-a018-43df-8b4a-79f304752516,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:55.728570 containerd[1986]: time="2025-06-20T19:56:55.728398538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b78b49cf-hbj2b,Uid:16631420-885d-40eb-a4ee-c415ba82428d,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:56:55.729752 containerd[1986]: time="2025-06-20T19:56:55.729473085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-92w4t,Uid:d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098,Namespace:calico-system,Attempt:0,}" Jun 20 19:56:56.253399 systemd-networkd[1834]: calic929cd316ef: Link UP Jun 20 19:56:56.259193 systemd-networkd[1834]: calic929cd316ef: Gained carrier Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:55.912 [INFO][5133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:55.939 [INFO][5133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0 calico-kube-controllers-66d5f994b4- calico-system afa6ee2a-ffdb-4330-9d6c-b45312853c2e 860 0 2025-06-20 19:56:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66d5f994b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-175 calico-kube-controllers-66d5f994b4-9qcrd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic929cd316ef [] [] }} ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Namespace="calico-system" Pod="calico-kube-controllers-66d5f994b4-9qcrd" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:55.940 [INFO][5133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Namespace="calico-system" Pod="calico-kube-controllers-66d5f994b4-9qcrd" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.077 [INFO][5177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" HandleID="k8s-pod-network.97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Workload="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.078 [INFO][5177] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" HandleID="k8s-pod-network.97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Workload="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000347260), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-175", "pod":"calico-kube-controllers-66d5f994b4-9qcrd", "timestamp":"2025-06-20 19:56:56.077328288 +0000 UTC"}, Hostname:"ip-172-31-30-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.079 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.081 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.082 [INFO][5177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-175' Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.105 [INFO][5177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" host="ip-172-31-30-175" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.120 [INFO][5177] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-175" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.138 [INFO][5177] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.143 [INFO][5177] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.148 [INFO][5177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.149 [INFO][5177] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" host="ip-172-31-30-175" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.153 [INFO][5177] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4 Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.165 [INFO][5177] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" host="ip-172-31-30-175" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.191 [INFO][5177] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.68/26] block=192.168.9.64/26 handle="k8s-pod-network.97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" host="ip-172-31-30-175" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.191 [INFO][5177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.68/26] handle="k8s-pod-network.97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" host="ip-172-31-30-175" Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.191 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:56:56.336244 containerd[1986]: 2025-06-20 19:56:56.191 [INFO][5177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.68/26] IPv6=[] ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" HandleID="k8s-pod-network.97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Workload="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" Jun 20 19:56:56.374938 containerd[1986]: 2025-06-20 19:56:56.207 [INFO][5133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Namespace="calico-system" Pod="calico-kube-controllers-66d5f994b4-9qcrd" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0", GenerateName:"calico-kube-controllers-66d5f994b4-", Namespace:"calico-system", SelfLink:"", UID:"afa6ee2a-ffdb-4330-9d6c-b45312853c2e", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66d5f994b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"", Pod:"calico-kube-controllers-66d5f994b4-9qcrd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic929cd316ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:56.374938 containerd[1986]: 2025-06-20 19:56:56.209 [INFO][5133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.68/32] ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Namespace="calico-system" Pod="calico-kube-controllers-66d5f994b4-9qcrd" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" Jun 20 19:56:56.374938 containerd[1986]: 2025-06-20 19:56:56.209 [INFO][5133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic929cd316ef ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Namespace="calico-system" Pod="calico-kube-controllers-66d5f994b4-9qcrd" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" Jun 20 19:56:56.374938 containerd[1986]: 2025-06-20 19:56:56.266 [INFO][5133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Namespace="calico-system" Pod="calico-kube-controllers-66d5f994b4-9qcrd" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" Jun 20 19:56:56.374938 containerd[1986]: 
2025-06-20 19:56:56.270 [INFO][5133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Namespace="calico-system" Pod="calico-kube-controllers-66d5f994b4-9qcrd" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0", GenerateName:"calico-kube-controllers-66d5f994b4-", Namespace:"calico-system", SelfLink:"", UID:"afa6ee2a-ffdb-4330-9d6c-b45312853c2e", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66d5f994b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4", Pod:"calico-kube-controllers-66d5f994b4-9qcrd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic929cd316ef", MAC:"8a:9f:2a:f4:17:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:56.374938 containerd[1986]: 2025-06-20 19:56:56.314 [INFO][5133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" Namespace="calico-system" Pod="calico-kube-controllers-66d5f994b4-9qcrd" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--kube--controllers--66d5f994b4--9qcrd-eth0" Jun 20 19:56:56.397277 kubelet[3303]: I0620 19:56:56.357884 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nlbrf" podStartSLOduration=42.344255504 podStartE2EDuration="42.344255504s" podCreationTimestamp="2025-06-20 19:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:56:56.342685788 +0000 UTC m=+47.781975474" watchObservedRunningTime="2025-06-20 19:56:56.344255504 +0000 UTC m=+47.783545190" Jun 20 19:56:56.468164 systemd-networkd[1834]: calia5c1ad025b1: Link UP Jun 20 19:56:56.470277 systemd-networkd[1834]: calia5c1ad025b1: Gained carrier Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:55.984 [INFO][5147] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.037 [INFO][5147] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0 calico-apiserver-54b78b49cf- calico-apiserver 16631420-885d-40eb-a4ee-c415ba82428d 
862 0 2025-06-20 19:56:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54b78b49cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-175 calico-apiserver-54b78b49cf-hbj2b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia5c1ad025b1 [] [] }} ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-hbj2b" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.038 [INFO][5147] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-hbj2b" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.173 [INFO][5191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" HandleID="k8s-pod-network.2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Workload="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.181 [INFO][5191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" HandleID="k8s-pod-network.2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Workload="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bf980), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-175", "pod":"calico-apiserver-54b78b49cf-hbj2b", "timestamp":"2025-06-20 19:56:56.173102577 +0000 UTC"}, Hostname:"ip-172-31-30-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.181 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.191 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.192 [INFO][5191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-175' Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.256 [INFO][5191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" host="ip-172-31-30-175" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.272 [INFO][5191] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-175" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.291 [INFO][5191] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.311 [INFO][5191] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.332 [INFO][5191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.332 [INFO][5191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" host="ip-172-31-30-175" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.353 [INFO][5191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.382 [INFO][5191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" host="ip-172-31-30-175" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.407 [INFO][5191] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.69/26] block=192.168.9.64/26 handle="k8s-pod-network.2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" host="ip-172-31-30-175" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.410 [INFO][5191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.69/26] handle="k8s-pod-network.2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" host="ip-172-31-30-175" Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.412 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:56:56.536394 containerd[1986]: 2025-06-20 19:56:56.415 [INFO][5191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.69/26] IPv6=[] ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" HandleID="k8s-pod-network.2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Workload="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" Jun 20 19:56:56.540730 containerd[1986]: 2025-06-20 19:56:56.455 [INFO][5147] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-hbj2b" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0", GenerateName:"calico-apiserver-54b78b49cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"16631420-885d-40eb-a4ee-c415ba82428d", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b78b49cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"", Pod:"calico-apiserver-54b78b49cf-hbj2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia5c1ad025b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:56.540730 containerd[1986]: 2025-06-20 19:56:56.457 [INFO][5147] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.69/32] ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-hbj2b" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" Jun 20 19:56:56.540730 containerd[1986]: 2025-06-20 19:56:56.460 [INFO][5147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5c1ad025b1 ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-hbj2b" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" Jun 20 19:56:56.540730 containerd[1986]: 2025-06-20 19:56:56.472 [INFO][5147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-hbj2b" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" Jun 20 19:56:56.540730 containerd[1986]: 2025-06-20 19:56:56.479 [INFO][5147] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-hbj2b" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0", GenerateName:"calico-apiserver-54b78b49cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"16631420-885d-40eb-a4ee-c415ba82428d", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b78b49cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c", Pod:"calico-apiserver-54b78b49cf-hbj2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia5c1ad025b1", MAC:"92:3f:60:e8:40:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:56.540730 containerd[1986]: 2025-06-20 19:56:56.524 [INFO][5147] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" Namespace="calico-apiserver" Pod="calico-apiserver-54b78b49cf-hbj2b" WorkloadEndpoint="ip--172--31--30--175-k8s-calico--apiserver--54b78b49cf--hbj2b-eth0" Jun 20 19:56:56.551967 containerd[1986]: time="2025-06-20T19:56:56.551824026Z" level=info msg="connecting to shim 97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4" address="unix:///run/containerd/s/dec51e27e5b1a98691f0fe81ed736dabc854f0614dfd3c93492531e6b2a3ebce" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:56.645501 systemd[1]: Started cri-containerd-97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4.scope - libcontainer container 97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4. Jun 20 19:56:56.657011 containerd[1986]: time="2025-06-20T19:56:56.656432559Z" level=info msg="connecting to shim 2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c" address="unix:///run/containerd/s/29a234873925caf82d73d499519b77aadc70fa68b42e79bd4fd57df8ba1d7986" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:56.752260 containerd[1986]: time="2025-06-20T19:56:56.749384368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x5fjm,Uid:f097a8f6-c458-48b9-bc63-afb7bb73c93e,Namespace:kube-system,Attempt:0,}" Jun 20 19:56:56.783457 systemd[1]: Started cri-containerd-2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c.scope - libcontainer container 2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c. 
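Every MAC the "Added Mac, interface name, and active container ID" steps record in this journal (32:b0:cf:ca:b0:c5, ae:74:c1:16:63:26, ba:d9:5e:d8:c0:06, 8a:9f:2a:f4:17:62, 92:3f:60:e8:40:54) is a locally administered unicast address: bit 0x02 of the first octet is set and bit 0x01 is clear. Below is a generic Go sketch for producing an address of that form; it shows the common technique, not necessarily the method Calico itself uses:

package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a random locally administered, unicast MAC address
// (locally-administered bit set, multicast bit cleared in the first octet).
func randomLocalMAC() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	buf[0] = (buf[0] | 0x02) &^ 0x01
	return net.HardwareAddr(buf), nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println("generated:", mac)
	fmt.Println("from log:  92:3f:60:e8:40:54 (also locally administered unicast)")
}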
Jun 20 19:56:56.826450 systemd-networkd[1834]: cali3f057d78274: Link UP Jun 20 19:56:56.832982 systemd-networkd[1834]: cali3f057d78274: Gained carrier Jun 20 19:56:56.839511 systemd-networkd[1834]: calied743e589ff: Gained IPv6LL Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:55.965 [INFO][5137] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.021 [INFO][5137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0 goldmane-5bd85449d4- calico-system fbc552a9-a018-43df-8b4a-79f304752516 859 0 2025-06-20 19:56:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-175 goldmane-5bd85449d4-2hbtg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3f057d78274 [] [] }} ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Namespace="calico-system" Pod="goldmane-5bd85449d4-2hbtg" WorkloadEndpoint="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.022 [INFO][5137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Namespace="calico-system" Pod="goldmane-5bd85449d4-2hbtg" WorkloadEndpoint="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.226 [INFO][5188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" HandleID="k8s-pod-network.27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Workload="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.227 [INFO][5188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" HandleID="k8s-pod-network.27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Workload="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125890), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-175", "pod":"goldmane-5bd85449d4-2hbtg", "timestamp":"2025-06-20 19:56:56.226100841 +0000 UTC"}, Hostname:"ip-172-31-30-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.229 [INFO][5188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.411 [INFO][5188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.411 [INFO][5188] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-175' Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.482 [INFO][5188] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" host="ip-172-31-30-175" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.551 [INFO][5188] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-175" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.577 [INFO][5188] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.598 [INFO][5188] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.626 [INFO][5188] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.628 [INFO][5188] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" host="ip-172-31-30-175" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.643 [INFO][5188] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.665 [INFO][5188] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" host="ip-172-31-30-175" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.692 [INFO][5188] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.70/26] block=192.168.9.64/26 handle="k8s-pod-network.27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" host="ip-172-31-30-175" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.692 [INFO][5188] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.70/26] handle="k8s-pod-network.27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" host="ip-172-31-30-175" Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.700 [INFO][5188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:56:56.873114 containerd[1986]: 2025-06-20 19:56:56.703 [INFO][5188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.70/26] IPv6=[] ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" HandleID="k8s-pod-network.27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Workload="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" Jun 20 19:56:56.875139 containerd[1986]: 2025-06-20 19:56:56.776 [INFO][5137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Namespace="calico-system" Pod="goldmane-5bd85449d4-2hbtg" WorkloadEndpoint="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"fbc552a9-a018-43df-8b4a-79f304752516", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"", Pod:"goldmane-5bd85449d4-2hbtg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f057d78274", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:56.875139 containerd[1986]: 2025-06-20 19:56:56.777 [INFO][5137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.70/32] ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Namespace="calico-system" Pod="goldmane-5bd85449d4-2hbtg" WorkloadEndpoint="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" Jun 20 19:56:56.875139 containerd[1986]: 2025-06-20 19:56:56.777 [INFO][5137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f057d78274 ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Namespace="calico-system" Pod="goldmane-5bd85449d4-2hbtg" WorkloadEndpoint="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" Jun 20 19:56:56.875139 containerd[1986]: 2025-06-20 19:56:56.834 [INFO][5137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Namespace="calico-system" Pod="goldmane-5bd85449d4-2hbtg" WorkloadEndpoint="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" Jun 20 19:56:56.875139 containerd[1986]: 2025-06-20 19:56:56.836 [INFO][5137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Namespace="calico-system" Pod="goldmane-5bd85449d4-2hbtg" 
WorkloadEndpoint="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"fbc552a9-a018-43df-8b4a-79f304752516", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc", Pod:"goldmane-5bd85449d4-2hbtg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f057d78274", MAC:"02:e9:82:09:63:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:56.875139 containerd[1986]: 2025-06-20 19:56:56.864 [INFO][5137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" Namespace="calico-system" Pod="goldmane-5bd85449d4-2hbtg" WorkloadEndpoint="ip--172--31--30--175-k8s-goldmane--5bd85449d4--2hbtg-eth0" Jun 20 19:56:57.001264 systemd-networkd[1834]: calid09b8176443: Link UP Jun 20 19:56:57.003336 systemd-networkd[1834]: calid09b8176443: Gained carrier Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:55.980 [INFO][5150] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.059 [INFO][5150] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0 csi-node-driver- calico-system d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098 745 0 2025-06-20 19:56:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-175 csi-node-driver-92w4t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid09b8176443 [] [] }} ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Namespace="calico-system" Pod="csi-node-driver-92w4t" WorkloadEndpoint="ip--172--31--30--175-k8s-csi--node--driver--92w4t-" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.059 [INFO][5150] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Namespace="calico-system" Pod="csi-node-driver-92w4t" WorkloadEndpoint="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" Jun 20 
19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.290 [INFO][5200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" HandleID="k8s-pod-network.0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Workload="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.292 [INFO][5200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" HandleID="k8s-pod-network.0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Workload="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037af00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-175", "pod":"csi-node-driver-92w4t", "timestamp":"2025-06-20 19:56:56.290532592 +0000 UTC"}, Hostname:"ip-172-31-30-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.292 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.695 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.697 [INFO][5200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-175' Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.758 [INFO][5200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" host="ip-172-31-30-175" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.851 [INFO][5200] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-175" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.891 [INFO][5200] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.902 [INFO][5200] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.914 [INFO][5200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.915 [INFO][5200] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" host="ip-172-31-30-175" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.920 [INFO][5200] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984 Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.930 [INFO][5200] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" host="ip-172-31-30-175" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.954 [INFO][5200] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.71/26] block=192.168.9.64/26 
handle="k8s-pod-network.0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" host="ip-172-31-30-175" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.954 [INFO][5200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.71/26] handle="k8s-pod-network.0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" host="ip-172-31-30-175" Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.954 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:56:57.060760 containerd[1986]: 2025-06-20 19:56:56.954 [INFO][5200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.71/26] IPv6=[] ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" HandleID="k8s-pod-network.0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Workload="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" Jun 20 19:56:57.063550 containerd[1986]: 2025-06-20 19:56:56.989 [INFO][5150] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Namespace="calico-system" Pod="csi-node-driver-92w4t" WorkloadEndpoint="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"", Pod:"csi-node-driver-92w4t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid09b8176443", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:57.063550 containerd[1986]: 2025-06-20 19:56:56.989 [INFO][5150] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.71/32] ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Namespace="calico-system" Pod="csi-node-driver-92w4t" WorkloadEndpoint="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" Jun 20 19:56:57.063550 containerd[1986]: 2025-06-20 19:56:56.990 [INFO][5150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid09b8176443 ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Namespace="calico-system" Pod="csi-node-driver-92w4t" WorkloadEndpoint="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" Jun 20 19:56:57.063550 containerd[1986]: 2025-06-20 19:56:57.008 [INFO][5150] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Namespace="calico-system" Pod="csi-node-driver-92w4t" WorkloadEndpoint="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" Jun 20 19:56:57.063550 containerd[1986]: 2025-06-20 19:56:57.010 [INFO][5150] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Namespace="calico-system" Pod="csi-node-driver-92w4t" WorkloadEndpoint="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984", Pod:"csi-node-driver-92w4t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid09b8176443", MAC:"3a:be:d5:79:1e:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:57.063550 containerd[1986]: 2025-06-20 19:56:57.040 [INFO][5150] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" Namespace="calico-system" Pod="csi-node-driver-92w4t" WorkloadEndpoint="ip--172--31--30--175-k8s-csi--node--driver--92w4t-eth0" Jun 20 19:56:57.064570 containerd[1986]: time="2025-06-20T19:56:57.063837241Z" level=info msg="connecting to shim 27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc" address="unix:///run/containerd/s/6aac6ac5b18977fa86b7f0a54b28af6563f9597d66faae94cf14ed96f5d097b7" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:57.200519 systemd[1]: Started cri-containerd-27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc.scope - libcontainer container 27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc. 
Jun 20 19:56:57.227713 containerd[1986]: time="2025-06-20T19:56:57.226640612Z" level=info msg="connecting to shim 0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984" address="unix:///run/containerd/s/2fca04992cedeffb7ad5f6d1c280f5290fcfa0f619d155918f0494e07d9680ac" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:57.374040 systemd[1]: Started cri-containerd-0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984.scope - libcontainer container 0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984. Jun 20 19:56:57.459403 systemd-networkd[1834]: cali399831c3105: Link UP Jun 20 19:56:57.461540 systemd-networkd[1834]: cali399831c3105: Gained carrier Jun 20 19:56:57.470178 containerd[1986]: time="2025-06-20T19:56:57.470006808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66d5f994b4-9qcrd,Uid:afa6ee2a-ffdb-4330-9d6c-b45312853c2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4\"" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:56.974 [INFO][5302] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.044 [INFO][5302] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0 coredns-674b8bbfcf- kube-system f097a8f6-c458-48b9-bc63-afb7bb73c93e 856 0 2025-06-20 19:56:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-175 coredns-674b8bbfcf-x5fjm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali399831c3105 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x5fjm" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.044 [INFO][5302] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x5fjm" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.259 [INFO][5342] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" HandleID="k8s-pod-network.e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Workload="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.261 [INFO][5342] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" HandleID="k8s-pod-network.e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Workload="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7d80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-175", "pod":"coredns-674b8bbfcf-x5fjm", "timestamp":"2025-06-20 19:56:57.257723052 +0000 UTC"}, Hostname:"ip-172-31-30-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.262 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.262 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.263 [INFO][5342] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-175' Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.293 [INFO][5342] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" host="ip-172-31-30-175" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.334 [INFO][5342] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-175" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.351 [INFO][5342] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.356 [INFO][5342] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.361 [INFO][5342] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-30-175" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.361 [INFO][5342] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" host="ip-172-31-30-175" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.364 [INFO][5342] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.383 [INFO][5342] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" host="ip-172-31-30-175" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.412 [INFO][5342] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.72/26] block=192.168.9.64/26 handle="k8s-pod-network.e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" host="ip-172-31-30-175" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.412 [INFO][5342] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.72/26] handle="k8s-pod-network.e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" host="ip-172-31-30-175" Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.412 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:56:57.517510 containerd[1986]: 2025-06-20 19:56:57.412 [INFO][5342] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.72/26] IPv6=[] ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" HandleID="k8s-pod-network.e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Workload="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" Jun 20 19:56:57.518476 containerd[1986]: 2025-06-20 19:56:57.432 [INFO][5302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x5fjm" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f097a8f6-c458-48b9-bc63-afb7bb73c93e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"", Pod:"coredns-674b8bbfcf-x5fjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali399831c3105", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:57.518476 containerd[1986]: 2025-06-20 19:56:57.439 [INFO][5302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.72/32] ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x5fjm" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" Jun 20 19:56:57.518476 containerd[1986]: 2025-06-20 19:56:57.442 [INFO][5302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali399831c3105 ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x5fjm" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" Jun 20 19:56:57.518476 containerd[1986]: 2025-06-20 19:56:57.463 [INFO][5302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x5fjm" 
WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" Jun 20 19:56:57.518476 containerd[1986]: 2025-06-20 19:56:57.464 [INFO][5302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x5fjm" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f097a8f6-c458-48b9-bc63-afb7bb73c93e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-175", ContainerID:"e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f", Pod:"coredns-674b8bbfcf-x5fjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali399831c3105", MAC:"8e:50:90:91:c6:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:56:57.518476 containerd[1986]: 2025-06-20 19:56:57.502 [INFO][5302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x5fjm" WorkloadEndpoint="ip--172--31--30--175-k8s-coredns--674b8bbfcf--x5fjm-eth0" Jun 20 19:56:57.599042 containerd[1986]: time="2025-06-20T19:56:57.598018927Z" level=info msg="connecting to shim e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f" address="unix:///run/containerd/s/bbf4948c9e8f08321a6f422ecf1fb51dd9ad263bae027a2c1741e7f0e1d78837" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:56:57.639317 containerd[1986]: time="2025-06-20T19:56:57.639249417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-92w4t,Uid:d1b328f6-f5a3-4a18-86d2-b8e5e3cf3098,Namespace:calico-system,Attempt:0,} returns sandbox id \"0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984\"" Jun 20 19:56:57.700760 systemd[1]: Started cri-containerd-e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f.scope - libcontainer container e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f. 
Jun 20 19:56:57.736482 systemd-networkd[1834]: calia5c1ad025b1: Gained IPv6LL Jun 20 19:56:57.736803 systemd-networkd[1834]: calic929cd316ef: Gained IPv6LL Jun 20 19:56:57.792939 systemd[1]: Started sshd@9-172.31.30.175:22-147.75.109.163:48286.service - OpenSSH per-connection server daemon (147.75.109.163:48286). Jun 20 19:56:57.943790 containerd[1986]: time="2025-06-20T19:56:57.943736589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-2hbtg,Uid:fbc552a9-a018-43df-8b4a-79f304752516,Namespace:calico-system,Attempt:0,} returns sandbox id \"27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc\"" Jun 20 19:56:58.115162 containerd[1986]: time="2025-06-20T19:56:58.115119870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b78b49cf-hbj2b,Uid:16631420-885d-40eb-a4ee-c415ba82428d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c\"" Jun 20 19:56:58.142035 sshd[5503]: Accepted publickey for core from 147.75.109.163 port 48286 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:56:58.150124 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:56:58.153244 containerd[1986]: time="2025-06-20T19:56:58.153172435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x5fjm,Uid:f097a8f6-c458-48b9-bc63-afb7bb73c93e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f\"" Jun 20 19:56:58.167295 systemd-logind[1969]: New session 10 of user core. Jun 20 19:56:58.173124 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:56:58.216388 containerd[1986]: time="2025-06-20T19:56:58.216199012Z" level=info msg="CreateContainer within sandbox \"e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:56:58.262296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318030842.mount: Deactivated successfully. Jun 20 19:56:58.278250 containerd[1986]: time="2025-06-20T19:56:58.270551871Z" level=info msg="Container 861d67a9dd4b336c091e091a81245b07e6dfec0074de87abc66545d9dbaa4c23: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:56:58.294497 containerd[1986]: time="2025-06-20T19:56:58.294437550Z" level=info msg="CreateContainer within sandbox \"e0efc22443785b86d007cc2ac9f355b563e76ba703e02a90ac4136eac8c28f7f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"861d67a9dd4b336c091e091a81245b07e6dfec0074de87abc66545d9dbaa4c23\"" Jun 20 19:56:58.320655 containerd[1986]: time="2025-06-20T19:56:58.320586273Z" level=info msg="StartContainer for \"861d67a9dd4b336c091e091a81245b07e6dfec0074de87abc66545d9dbaa4c23\"" Jun 20 19:56:58.329924 containerd[1986]: time="2025-06-20T19:56:58.329506530Z" level=info msg="connecting to shim 861d67a9dd4b336c091e091a81245b07e6dfec0074de87abc66545d9dbaa4c23" address="unix:///run/containerd/s/bbf4948c9e8f08321a6f422ecf1fb51dd9ad263bae027a2c1741e7f0e1d78837" protocol=ttrpc version=3 Jun 20 19:56:58.442875 systemd[1]: Started cri-containerd-861d67a9dd4b336c091e091a81245b07e6dfec0074de87abc66545d9dbaa4c23.scope - libcontainer container 861d67a9dd4b336c091e091a81245b07e6dfec0074de87abc66545d9dbaa4c23. 
Jun 20 19:56:58.569356 systemd-networkd[1834]: cali3f057d78274: Gained IPv6LL Jun 20 19:56:58.571053 systemd-networkd[1834]: calid09b8176443: Gained IPv6LL Jun 20 19:56:58.581819 kubelet[3303]: I0620 19:56:58.580349 3303 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:56:58.680264 containerd[1986]: time="2025-06-20T19:56:58.680072995Z" level=info msg="StartContainer for \"861d67a9dd4b336c091e091a81245b07e6dfec0074de87abc66545d9dbaa4c23\" returns successfully" Jun 20 19:56:58.759987 systemd-networkd[1834]: cali399831c3105: Gained IPv6LL Jun 20 19:56:59.627346 sshd[5528]: Connection closed by 147.75.109.163 port 48286 Jun 20 19:56:59.628011 sshd-session[5503]: pam_unix(sshd:session): session closed for user core Jun 20 19:56:59.644359 systemd[1]: sshd@9-172.31.30.175:22-147.75.109.163:48286.service: Deactivated successfully. Jun 20 19:56:59.651706 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:56:59.654734 systemd-logind[1969]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:56:59.661818 systemd-logind[1969]: Removed session 10. Jun 20 19:56:59.748652 kubelet[3303]: I0620 19:56:59.735667 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x5fjm" podStartSLOduration=45.735642247 podStartE2EDuration="45.735642247s" podCreationTimestamp="2025-06-20 19:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:56:59.680477863 +0000 UTC m=+51.119767571" watchObservedRunningTime="2025-06-20 19:56:59.735642247 +0000 UTC m=+51.174931933" Jun 20 19:57:00.970312 containerd[1986]: time="2025-06-20T19:57:00.970253097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:57:01.040263 containerd[1986]: time="2025-06-20T19:57:01.038216938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 6.123255517s" Jun 20 19:57:01.040263 containerd[1986]: time="2025-06-20T19:57:01.038288202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:57:01.048763 containerd[1986]: time="2025-06-20T19:57:01.045515277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 19:57:01.106314 containerd[1986]: time="2025-06-20T19:57:01.106235221Z" level=info msg="CreateContainer within sandbox \"380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:57:01.107731 containerd[1986]: time="2025-06-20T19:57:01.107689446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:01.109573 containerd[1986]: time="2025-06-20T19:57:01.109450406Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:01.111296 containerd[1986]: 
time="2025-06-20T19:57:01.110679956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:01.124540 containerd[1986]: time="2025-06-20T19:57:01.124406208Z" level=info msg="Container 2a8925abaead94aa34bddf3bac76aca5fa096688b04b35f1b6fb26fa9ef285f3: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:57:01.148014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432246237.mount: Deactivated successfully. Jun 20 19:57:01.184531 containerd[1986]: time="2025-06-20T19:57:01.184034981Z" level=info msg="CreateContainer within sandbox \"380dcf96d75b6796b02028710f691be59f4f2dee357ef9efa0173d30db91497a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2a8925abaead94aa34bddf3bac76aca5fa096688b04b35f1b6fb26fa9ef285f3\"" Jun 20 19:57:01.188978 containerd[1986]: time="2025-06-20T19:57:01.188600654Z" level=info msg="StartContainer for \"2a8925abaead94aa34bddf3bac76aca5fa096688b04b35f1b6fb26fa9ef285f3\"" Jun 20 19:57:01.193805 containerd[1986]: time="2025-06-20T19:57:01.193761633Z" level=info msg="connecting to shim 2a8925abaead94aa34bddf3bac76aca5fa096688b04b35f1b6fb26fa9ef285f3" address="unix:///run/containerd/s/7f0a655e334221d1dd4af6e1e8acc262608272acc0929baec0fcba5f43f0c485" protocol=ttrpc version=3 Jun 20 19:57:01.261791 systemd[1]: Started cri-containerd-2a8925abaead94aa34bddf3bac76aca5fa096688b04b35f1b6fb26fa9ef285f3.scope - libcontainer container 2a8925abaead94aa34bddf3bac76aca5fa096688b04b35f1b6fb26fa9ef285f3. Jun 20 19:57:01.449004 containerd[1986]: time="2025-06-20T19:57:01.448960014Z" level=info msg="StartContainer for \"2a8925abaead94aa34bddf3bac76aca5fa096688b04b35f1b6fb26fa9ef285f3\" returns successfully" Jun 20 19:57:01.492730 systemd-networkd[1834]: vxlan.calico: Link UP Jun 20 19:57:01.493328 systemd-networkd[1834]: vxlan.calico: Gained carrier Jun 20 19:57:01.572380 (udev-worker)[4637]: Network interface NamePolicy= disabled on kernel command line. Jun 20 19:57:02.791801 systemd-networkd[1834]: vxlan.calico: Gained IPv6LL Jun 20 19:57:03.979105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288381548.mount: Deactivated successfully. 
Jun 20 19:57:04.024834 containerd[1986]: time="2025-06-20T19:57:04.024778312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:04.026134 containerd[1986]: time="2025-06-20T19:57:04.025894415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 20 19:57:04.028241 containerd[1986]: time="2025-06-20T19:57:04.027207195Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:04.032693 containerd[1986]: time="2025-06-20T19:57:04.032653713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:04.052456 containerd[1986]: time="2025-06-20T19:57:04.052399991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 3.006612676s" Jun 20 19:57:04.052650 containerd[1986]: time="2025-06-20T19:57:04.052629194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 20 19:57:04.054447 containerd[1986]: time="2025-06-20T19:57:04.054377748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:57:04.059206 containerd[1986]: time="2025-06-20T19:57:04.059155378Z" level=info msg="CreateContainer within sandbox \"921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 19:57:04.070588 containerd[1986]: time="2025-06-20T19:57:04.070365962Z" level=info msg="Container 231c5baf33b0168ea1e1cfe616f6dbd66220bfe3029ecd624ca518ba39372413: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:57:04.083250 containerd[1986]: time="2025-06-20T19:57:04.083029630Z" level=info msg="CreateContainer within sandbox \"921ac1e34fe0233a7b7966381141ae7e01b78d800f2d040d54a01a7e89a2d258\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"231c5baf33b0168ea1e1cfe616f6dbd66220bfe3029ecd624ca518ba39372413\"" Jun 20 19:57:04.083732 containerd[1986]: time="2025-06-20T19:57:04.083695772Z" level=info msg="StartContainer for \"231c5baf33b0168ea1e1cfe616f6dbd66220bfe3029ecd624ca518ba39372413\"" Jun 20 19:57:04.100618 containerd[1986]: time="2025-06-20T19:57:04.099627358Z" level=info msg="connecting to shim 231c5baf33b0168ea1e1cfe616f6dbd66220bfe3029ecd624ca518ba39372413" address="unix:///run/containerd/s/dd7301b1f31debae588d4365736629e3590d5a09da4d1f3e7aa4220b952ff840" protocol=ttrpc version=3 Jun 20 19:57:04.179850 systemd[1]: Started cri-containerd-231c5baf33b0168ea1e1cfe616f6dbd66220bfe3029ecd624ca518ba39372413.scope - libcontainer container 231c5baf33b0168ea1e1cfe616f6dbd66220bfe3029ecd624ca518ba39372413. 
Jun 20 19:57:04.262134 containerd[1986]: time="2025-06-20T19:57:04.262009759Z" level=info msg="StartContainer for \"231c5baf33b0168ea1e1cfe616f6dbd66220bfe3029ecd624ca518ba39372413\" returns successfully" Jun 20 19:57:04.666325 systemd[1]: Started sshd@10-172.31.30.175:22-147.75.109.163:48296.service - OpenSSH per-connection server daemon (147.75.109.163:48296). Jun 20 19:57:04.764252 kubelet[3303]: I0620 19:57:04.762709 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54b78b49cf-c8srm" podStartSLOduration=32.751854718 podStartE2EDuration="39.7626853s" podCreationTimestamp="2025-06-20 19:56:25 +0000 UTC" firstStartedPulling="2025-06-20 19:56:54.030215836 +0000 UTC m=+45.469505513" lastFinishedPulling="2025-06-20 19:57:01.041046415 +0000 UTC m=+52.480336095" observedRunningTime="2025-06-20 19:57:01.737508383 +0000 UTC m=+53.176798068" watchObservedRunningTime="2025-06-20 19:57:04.7626853 +0000 UTC m=+56.201974987" Jun 20 19:57:04.764252 kubelet[3303]: I0620 19:57:04.762827 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-988b5b6d7-4wshk" podStartSLOduration=2.348555368 podStartE2EDuration="12.762821807s" podCreationTimestamp="2025-06-20 19:56:52 +0000 UTC" firstStartedPulling="2025-06-20 19:56:53.6396305 +0000 UTC m=+45.078920162" lastFinishedPulling="2025-06-20 19:57:04.05389693 +0000 UTC m=+55.493186601" observedRunningTime="2025-06-20 19:57:04.761467581 +0000 UTC m=+56.200757266" watchObservedRunningTime="2025-06-20 19:57:04.762821807 +0000 UTC m=+56.202111491" Jun 20 19:57:04.935412 sshd[5790]: Accepted publickey for core from 147.75.109.163 port 48296 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:04.943415 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:04.957004 systemd-logind[1969]: New session 11 of user core. Jun 20 19:57:04.960613 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 20 19:57:05.504375 ntpd[1962]: Listen normally on 8 vxlan.calico 192.168.9.64:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 8 vxlan.calico 192.168.9.64:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 9 cali37cf0736ec1 [fe80::ecee:eeff:feee:eeee%4]:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 10 cali6cb488f17ce [fe80::ecee:eeff:feee:eeee%5]:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 11 calied743e589ff [fe80::ecee:eeff:feee:eeee%6]:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 12 calic929cd316ef [fe80::ecee:eeff:feee:eeee%7]:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 13 calia5c1ad025b1 [fe80::ecee:eeff:feee:eeee%8]:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 14 cali3f057d78274 [fe80::ecee:eeff:feee:eeee%9]:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 15 calid09b8176443 [fe80::ecee:eeff:feee:eeee%10]:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 16 cali399831c3105 [fe80::ecee:eeff:feee:eeee%11]:123 Jun 20 19:57:05.506524 ntpd[1962]: 20 Jun 19:57:05 ntpd[1962]: Listen normally on 17 vxlan.calico [fe80::648a:58ff:fe90:9009%12]:123 Jun 20 19:57:05.504472 ntpd[1962]: Listen normally on 9 cali37cf0736ec1 [fe80::ecee:eeff:feee:eeee%4]:123 Jun 20 19:57:05.504526 ntpd[1962]: Listen normally on 10 cali6cb488f17ce [fe80::ecee:eeff:feee:eeee%5]:123 Jun 20 19:57:05.504563 ntpd[1962]: Listen normally on 11 calied743e589ff [fe80::ecee:eeff:feee:eeee%6]:123 Jun 20 19:57:05.504596 ntpd[1962]: Listen normally on 12 calic929cd316ef [fe80::ecee:eeff:feee:eeee%7]:123 Jun 20 19:57:05.504633 ntpd[1962]: Listen normally on 13 calia5c1ad025b1 [fe80::ecee:eeff:feee:eeee%8]:123 Jun 20 19:57:05.504667 ntpd[1962]: Listen normally on 14 cali3f057d78274 [fe80::ecee:eeff:feee:eeee%9]:123 Jun 20 19:57:05.504701 ntpd[1962]: Listen normally on 15 calid09b8176443 [fe80::ecee:eeff:feee:eeee%10]:123 Jun 20 19:57:05.504734 ntpd[1962]: Listen normally on 16 cali399831c3105 [fe80::ecee:eeff:feee:eeee%11]:123 Jun 20 19:57:05.504768 ntpd[1962]: Listen normally on 17 vxlan.calico [fe80::648a:58ff:fe90:9009%12]:123 Jun 20 19:57:06.307335 sshd[5796]: Connection closed by 147.75.109.163 port 48296 Jun 20 19:57:06.308013 sshd-session[5790]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:06.318995 systemd[1]: sshd@10-172.31.30.175:22-147.75.109.163:48296.service: Deactivated successfully. Jun 20 19:57:06.325830 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:57:06.347923 systemd-logind[1969]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:57:06.351664 systemd-logind[1969]: Removed session 11. 
Jun 20 19:57:07.255009 containerd[1986]: time="2025-06-20T19:57:07.254949872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:07.256476 containerd[1986]: time="2025-06-20T19:57:07.256259942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:57:07.257543 containerd[1986]: time="2025-06-20T19:57:07.257508660Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:07.260320 containerd[1986]: time="2025-06-20T19:57:07.260280404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:07.260615 containerd[1986]: time="2025-06-20T19:57:07.260550752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 3.206139069s" Jun 20 19:57:07.260677 containerd[1986]: time="2025-06-20T19:57:07.260624778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:57:07.297735 containerd[1986]: time="2025-06-20T19:57:07.297695801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:57:07.458242 containerd[1986]: time="2025-06-20T19:57:07.456692762Z" level=info msg="CreateContainer within sandbox \"97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:57:07.474682 containerd[1986]: time="2025-06-20T19:57:07.474640086Z" level=info msg="Container 2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:57:07.596626 containerd[1986]: time="2025-06-20T19:57:07.596190269Z" level=info msg="CreateContainer within sandbox \"97280ac296a4c939385a98b42792870135453e487951b9affd7096376b3638a4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26\"" Jun 20 19:57:07.606164 containerd[1986]: time="2025-06-20T19:57:07.606112300Z" level=info msg="StartContainer for \"2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26\"" Jun 20 19:57:07.609404 containerd[1986]: time="2025-06-20T19:57:07.609351514Z" level=info msg="connecting to shim 2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26" address="unix:///run/containerd/s/dec51e27e5b1a98691f0fe81ed736dabc854f0614dfd3c93492531e6b2a3ebce" protocol=ttrpc version=3 Jun 20 19:57:07.905028 systemd[1]: Started cri-containerd-2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26.scope - libcontainer container 2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26. 
Jun 20 19:57:08.104985 containerd[1986]: time="2025-06-20T19:57:08.104893423Z" level=info msg="StartContainer for \"2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26\" returns successfully" Jun 20 19:57:09.055591 containerd[1986]: time="2025-06-20T19:57:09.055478661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:09.059996 containerd[1986]: time="2025-06-20T19:57:09.059936418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 20 19:57:09.061415 containerd[1986]: time="2025-06-20T19:57:09.061034948Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:09.072459 containerd[1986]: time="2025-06-20T19:57:09.071209751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:09.072459 containerd[1986]: time="2025-06-20T19:57:09.072374606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.77463252s" Jun 20 19:57:09.072459 containerd[1986]: time="2025-06-20T19:57:09.072412266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 20 19:57:09.234874 containerd[1986]: time="2025-06-20T19:57:09.234829417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:57:09.241884 containerd[1986]: time="2025-06-20T19:57:09.241836381Z" level=info msg="CreateContainer within sandbox \"0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:57:09.276850 containerd[1986]: time="2025-06-20T19:57:09.273314909Z" level=info msg="Container 1eddff3670c1ed6a34194d0e4584b47ddb9238ca890583f1db646c6f97decb56: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:57:09.330356 containerd[1986]: time="2025-06-20T19:57:09.330236340Z" level=info msg="CreateContainer within sandbox \"0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1eddff3670c1ed6a34194d0e4584b47ddb9238ca890583f1db646c6f97decb56\"" Jun 20 19:57:09.334654 containerd[1986]: time="2025-06-20T19:57:09.334600825Z" level=info msg="StartContainer for \"1eddff3670c1ed6a34194d0e4584b47ddb9238ca890583f1db646c6f97decb56\"" Jun 20 19:57:09.342014 containerd[1986]: time="2025-06-20T19:57:09.341878750Z" level=info msg="connecting to shim 1eddff3670c1ed6a34194d0e4584b47ddb9238ca890583f1db646c6f97decb56" address="unix:///run/containerd/s/2fca04992cedeffb7ad5f6d1c280f5290fcfa0f619d155918f0494e07d9680ac" protocol=ttrpc version=3 Jun 20 19:57:09.466738 systemd[1]: Started cri-containerd-1eddff3670c1ed6a34194d0e4584b47ddb9238ca890583f1db646c6f97decb56.scope - libcontainer container 1eddff3670c1ed6a34194d0e4584b47ddb9238ca890583f1db646c6f97decb56. 
Jun 20 19:57:09.615192 containerd[1986]: time="2025-06-20T19:57:09.615070450Z" level=info msg="StartContainer for \"1eddff3670c1ed6a34194d0e4584b47ddb9238ca890583f1db646c6f97decb56\" returns successfully" Jun 20 19:57:09.810274 containerd[1986]: time="2025-06-20T19:57:09.810187599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26\" id:\"bfc4c2b26530a05a8791739f05aed18e139271c36bdc331e038eb2c434e07d2b\" pid:5902 exited_at:{seconds:1750449429 nanos:764273115}" Jun 20 19:57:09.897572 kubelet[3303]: I0620 19:57:09.890886 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66d5f994b4-9qcrd" podStartSLOduration=31.083121545 podStartE2EDuration="40.878887143s" podCreationTimestamp="2025-06-20 19:56:29 +0000 UTC" firstStartedPulling="2025-06-20 19:56:57.474735143 +0000 UTC m=+48.914024821" lastFinishedPulling="2025-06-20 19:57:07.270500743 +0000 UTC m=+58.709790419" observedRunningTime="2025-06-20 19:57:09.37820096 +0000 UTC m=+60.817490647" watchObservedRunningTime="2025-06-20 19:57:09.878887143 +0000 UTC m=+61.318176838" Jun 20 19:57:11.340108 systemd[1]: Started sshd@11-172.31.30.175:22-147.75.109.163:42770.service - OpenSSH per-connection server daemon (147.75.109.163:42770). Jun 20 19:57:11.580872 sshd[5922]: Accepted publickey for core from 147.75.109.163 port 42770 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:11.586574 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:11.593459 systemd-logind[1969]: New session 12 of user core. Jun 20 19:57:11.600428 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:57:12.613158 sshd[5924]: Connection closed by 147.75.109.163 port 42770 Jun 20 19:57:12.614530 sshd-session[5922]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:12.619092 systemd[1]: sshd@11-172.31.30.175:22-147.75.109.163:42770.service: Deactivated successfully. Jun 20 19:57:12.622689 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:57:12.626767 systemd-logind[1969]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:57:12.628016 systemd-logind[1969]: Removed session 12. Jun 20 19:57:12.652919 systemd[1]: Started sshd@12-172.31.30.175:22-147.75.109.163:42784.service - OpenSSH per-connection server daemon (147.75.109.163:42784). Jun 20 19:57:12.837261 sshd[5937]: Accepted publickey for core from 147.75.109.163 port 42784 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:12.839206 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:12.845587 systemd-logind[1969]: New session 13 of user core. Jun 20 19:57:12.851455 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:57:13.161653 sshd[5939]: Connection closed by 147.75.109.163 port 42784 Jun 20 19:57:13.162519 sshd-session[5937]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:13.170069 systemd[1]: sshd@12-172.31.30.175:22-147.75.109.163:42784.service: Deactivated successfully. Jun 20 19:57:13.171400 systemd-logind[1969]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:57:13.179036 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:57:13.209911 systemd-logind[1969]: Removed session 13. 
Jun 20 19:57:13.211568 systemd[1]: Started sshd@13-172.31.30.175:22-147.75.109.163:42794.service - OpenSSH per-connection server daemon (147.75.109.163:42794). Jun 20 19:57:13.425614 sshd[5957]: Accepted publickey for core from 147.75.109.163 port 42794 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:13.427474 sshd-session[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:13.439291 systemd-logind[1969]: New session 14 of user core. Jun 20 19:57:13.442543 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:57:13.814236 sshd[5959]: Connection closed by 147.75.109.163 port 42794 Jun 20 19:57:13.825419 sshd-session[5957]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:13.835253 systemd-logind[1969]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:57:13.836155 systemd[1]: sshd@13-172.31.30.175:22-147.75.109.163:42794.service: Deactivated successfully. Jun 20 19:57:13.839911 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:57:13.847108 systemd-logind[1969]: Removed session 14. Jun 20 19:57:16.009410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1889518799.mount: Deactivated successfully. Jun 20 19:57:16.850005 containerd[1986]: time="2025-06-20T19:57:16.849952652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:16.851814 containerd[1986]: time="2025-06-20T19:57:16.851769885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 20 19:57:16.859254 containerd[1986]: time="2025-06-20T19:57:16.858861555Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:16.868239 containerd[1986]: time="2025-06-20T19:57:16.868166393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:16.868848 containerd[1986]: time="2025-06-20T19:57:16.868768260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 7.633888302s" Jun 20 19:57:16.868848 containerd[1986]: time="2025-06-20T19:57:16.868799408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 20 19:57:16.870425 containerd[1986]: time="2025-06-20T19:57:16.870395626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:57:16.943862 containerd[1986]: time="2025-06-20T19:57:16.943805729Z" level=info msg="CreateContainer within sandbox \"27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:57:16.966246 containerd[1986]: time="2025-06-20T19:57:16.962417528Z" level=info msg="Container b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:57:17.007622 containerd[1986]: time="2025-06-20T19:57:17.007560587Z" level=info msg="CreateContainer within sandbox \"27e170b56839df7f7f71b4122a174c60c7c1efac0f7e66a5c56f56f1c193bbfc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058\"" Jun 20 19:57:17.009239 containerd[1986]: time="2025-06-20T19:57:17.008675446Z" level=info msg="StartContainer for \"b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058\"" Jun 20 19:57:17.010267 containerd[1986]: time="2025-06-20T19:57:17.010118737Z" level=info msg="connecting to shim b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058" address="unix:///run/containerd/s/6aac6ac5b18977fa86b7f0a54b28af6563f9597d66faae94cf14ed96f5d097b7" protocol=ttrpc version=3 Jun 20 19:57:17.139455 systemd[1]: Started cri-containerd-b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058.scope - libcontainer container b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058. Jun 20 19:57:17.401156 containerd[1986]: time="2025-06-20T19:57:17.401037614Z" level=info msg="StartContainer for \"b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058\" returns successfully" Jun 20 19:57:17.570631 containerd[1986]: time="2025-06-20T19:57:17.570586732Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:17.595742 containerd[1986]: time="2025-06-20T19:57:17.595702175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:57:17.598340 containerd[1986]: time="2025-06-20T19:57:17.598282438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 727.745682ms" Jun 20 19:57:17.598340 containerd[1986]: time="2025-06-20T19:57:17.598333107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:57:17.608374 containerd[1986]: time="2025-06-20T19:57:17.608327449Z" level=info msg="CreateContainer within sandbox \"2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:57:17.612154 containerd[1986]: time="2025-06-20T19:57:17.612116395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 19:57:17.625289 containerd[1986]: time="2025-06-20T19:57:17.625255523Z" level=info msg="Container ed663dc8a8ce21779f5233c3f3238934a8d22216382ef3e259a073d8bd5111dd: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:57:17.634397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488914633.mount: Deactivated successfully.
Jun 20 19:57:17.692573 containerd[1986]: time="2025-06-20T19:57:17.692451384Z" level=info msg="CreateContainer within sandbox \"2366c606607a0390f495156772f1b40739d211bc03be037ea04d917ca92e2e7c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ed663dc8a8ce21779f5233c3f3238934a8d22216382ef3e259a073d8bd5111dd\"" Jun 20 19:57:17.693416 containerd[1986]: time="2025-06-20T19:57:17.693383411Z" level=info msg="StartContainer for \"ed663dc8a8ce21779f5233c3f3238934a8d22216382ef3e259a073d8bd5111dd\"" Jun 20 19:57:17.702502 containerd[1986]: time="2025-06-20T19:57:17.702456519Z" level=info msg="connecting to shim ed663dc8a8ce21779f5233c3f3238934a8d22216382ef3e259a073d8bd5111dd" address="unix:///run/containerd/s/29a234873925caf82d73d499519b77aadc70fa68b42e79bd4fd57df8ba1d7986" protocol=ttrpc version=3 Jun 20 19:57:17.748500 systemd[1]: Started cri-containerd-ed663dc8a8ce21779f5233c3f3238934a8d22216382ef3e259a073d8bd5111dd.scope - libcontainer container ed663dc8a8ce21779f5233c3f3238934a8d22216382ef3e259a073d8bd5111dd. Jun 20 19:57:17.837153 containerd[1986]: time="2025-06-20T19:57:17.837105244Z" level=info msg="StartContainer for \"ed663dc8a8ce21779f5233c3f3238934a8d22216382ef3e259a073d8bd5111dd\" returns successfully" Jun 20 19:57:18.779110 kubelet[3303]: I0620 19:57:18.778631 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-2hbtg" podStartSLOduration=31.857938475 podStartE2EDuration="50.767748057s" podCreationTimestamp="2025-06-20 19:56:28 +0000 UTC" firstStartedPulling="2025-06-20 19:56:57.959997083 +0000 UTC m=+49.399286753" lastFinishedPulling="2025-06-20 19:57:16.869806672 +0000 UTC m=+68.309096335" observedRunningTime="2025-06-20 19:57:18.753645658 +0000 UTC m=+70.192935344" watchObservedRunningTime="2025-06-20 19:57:18.767748057 +0000 UTC m=+70.207037743" Jun 20 19:57:18.871526 systemd[1]: Started sshd@14-172.31.30.175:22-147.75.109.163:40516.service - OpenSSH per-connection server daemon (147.75.109.163:40516). Jun 20 19:57:19.041768 containerd[1986]: time="2025-06-20T19:57:19.041259190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058\" id:\"4593c29703d73afbb955664fe38d012eb09c5a1fb59133af9923c06a8baa4f8c\" pid:6065 exit_status:1 exited_at:{seconds:1750449439 nanos:26306621}" Jun 20 19:57:19.184611 sshd[6078]: Accepted publickey for core from 147.75.109.163 port 40516 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:19.190576 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:19.203413 systemd-logind[1969]: New session 15 of user core. Jun 20 19:57:19.207557 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 20 19:57:19.820815 containerd[1986]: time="2025-06-20T19:57:19.820615414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058\" id:\"f2bca497104ac08a71cc5af0ea998faa8a2f59e136a170d8b5aa62b25f2f67f5\" pid:6098 exit_status:1 exited_at:{seconds:1750449439 nanos:818212518}" Jun 20 19:57:20.810820 containerd[1986]: time="2025-06-20T19:57:20.810770873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:20.814184 containerd[1986]: time="2025-06-20T19:57:20.814138344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633" Jun 20 19:57:20.817708 containerd[1986]: time="2025-06-20T19:57:20.817652008Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:20.825117 containerd[1986]: time="2025-06-20T19:57:20.825065702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:57:20.833240 containerd[1986]: time="2025-06-20T19:57:20.833124383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 3.21532967s" Jun 20 19:57:20.833240 containerd[1986]: time="2025-06-20T19:57:20.833194428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\"" Jun 20 19:57:20.859241 containerd[1986]: time="2025-06-20T19:57:20.857801597Z" level=info msg="CreateContainer within sandbox \"0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 20 19:57:20.910811 containerd[1986]: time="2025-06-20T19:57:20.910766483Z" level=info msg="Container caefbd507c76d0ac0f5c1d1c728e7ae5ba76c82e48e2bbfeb05ef1583f25a3ca: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:57:20.935353 containerd[1986]: time="2025-06-20T19:57:20.934757176Z" level=info msg="CreateContainer within sandbox \"0adab8a5fd85039eb238a4a93d6d6129026a57bc25cb13abf34424127855a984\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"caefbd507c76d0ac0f5c1d1c728e7ae5ba76c82e48e2bbfeb05ef1583f25a3ca\"" Jun 20 19:57:20.937262 containerd[1986]: time="2025-06-20T19:57:20.936603150Z" level=info msg="StartContainer for \"caefbd507c76d0ac0f5c1d1c728e7ae5ba76c82e48e2bbfeb05ef1583f25a3ca\"" Jun 20 19:57:20.941321 containerd[1986]: time="2025-06-20T19:57:20.940565441Z" level=info msg="connecting to shim caefbd507c76d0ac0f5c1d1c728e7ae5ba76c82e48e2bbfeb05ef1583f25a3ca" address="unix:///run/containerd/s/2fca04992cedeffb7ad5f6d1c280f5290fcfa0f619d155918f0494e07d9680ac" protocol=ttrpc version=3
Jun 20 19:57:21.068364 systemd[1]: Started cri-containerd-caefbd507c76d0ac0f5c1d1c728e7ae5ba76c82e48e2bbfeb05ef1583f25a3ca.scope - libcontainer container caefbd507c76d0ac0f5c1d1c728e7ae5ba76c82e48e2bbfeb05ef1583f25a3ca. Jun 20 19:57:21.074396 sshd[6082]: Connection closed by 147.75.109.163 port 40516 Jun 20 19:57:21.078436 sshd-session[6078]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:21.089338 systemd[1]: sshd@14-172.31.30.175:22-147.75.109.163:40516.service: Deactivated successfully. Jun 20 19:57:21.091826 systemd-logind[1969]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:57:21.095615 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:57:21.119549 systemd[1]: Started sshd@15-172.31.30.175:22-147.75.109.163:40520.service - OpenSSH per-connection server daemon (147.75.109.163:40520). Jun 20 19:57:21.122729 systemd-logind[1969]: Removed session 15. Jun 20 19:57:21.231019 containerd[1986]: time="2025-06-20T19:57:21.230974241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058\" id:\"6fa5df7b471306ceb473640f764e014b88e22e8c961fe19b72334df18a937535\" pid:6136 exit_status:1 exited_at:{seconds:1750449441 nanos:230085773}" Jun 20 19:57:21.319924 containerd[1986]: time="2025-06-20T19:57:21.319722811Z" level=info msg="StartContainer for \"caefbd507c76d0ac0f5c1d1c728e7ae5ba76c82e48e2bbfeb05ef1583f25a3ca\" returns successfully" Jun 20 19:57:21.408833 containerd[1986]: time="2025-06-20T19:57:21.408550570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058\" id:\"429af2a6714af1bfccae05dfbeb28afe23c3fb0098f62517dc6bc852f86c908f\" pid:6172 exited_at:{seconds:1750449441 nanos:406505817}" Jun 20 19:57:21.412200 sshd[6189]: Accepted publickey for core from 147.75.109.163 port 40520 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:21.414530 sshd-session[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:21.430340 systemd-logind[1969]: New session 16 of user core. Jun 20 19:57:21.438580 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:57:21.663611 kubelet[3303]: I0620 19:57:21.663036 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-92w4t" podStartSLOduration=29.462991053 podStartE2EDuration="52.663010166s" podCreationTimestamp="2025-06-20 19:56:29 +0000 UTC" firstStartedPulling="2025-06-20 19:56:57.642858088 +0000 UTC m=+49.082147763" lastFinishedPulling="2025-06-20 19:57:20.842877208 +0000 UTC m=+72.282166876" observedRunningTime="2025-06-20 19:57:21.662194755 +0000 UTC m=+73.101484484" watchObservedRunningTime="2025-06-20 19:57:21.663010166 +0000 UTC m=+73.102299852" Jun 20 19:57:21.665278 kubelet[3303]: I0620 19:57:21.664580 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54b78b49cf-hbj2b" podStartSLOduration=37.190628679 podStartE2EDuration="56.66456228s" podCreationTimestamp="2025-06-20 19:56:25 +0000 UTC" firstStartedPulling="2025-06-20 19:56:58.125209466 +0000 UTC m=+49.564499129" lastFinishedPulling="2025-06-20 19:57:17.599143046 +0000 UTC m=+69.038432730" observedRunningTime="2025-06-20 19:57:18.912142092 +0000 UTC m=+70.351431793" watchObservedRunningTime="2025-06-20 19:57:21.66456228 +0000 UTC m=+73.103851966" Jun 20 19:57:21.942275 containerd[1986]: time="2025-06-20T19:57:21.942097219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca\" id:\"f3cd96be5dc50ad5aa4533ee64f8d66b434295393727a9222f29e547328ac18e\" pid:6206 exited_at:{seconds:1750449441 nanos:941465545}" Jun 20 19:57:22.308321 kubelet[3303]: I0620 19:57:22.301021 3303 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 20 19:57:22.314612 kubelet[3303]: I0620 19:57:22.314087 3303 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 20 19:57:25.558322 sshd[6233]: Connection closed by 147.75.109.163 port 40520 Jun 20 19:57:25.573937 sshd-session[6189]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:25.606363 systemd[1]: sshd@15-172.31.30.175:22-147.75.109.163:40520.service: Deactivated successfully. Jun 20 19:57:25.609574 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:57:25.611805 systemd-logind[1969]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:57:25.615140 systemd-logind[1969]: Removed session 16. Jun 20 19:57:25.617649 systemd[1]: Started sshd@16-172.31.30.175:22-147.75.109.163:40536.service - OpenSSH per-connection server daemon (147.75.109.163:40536). Jun 20 19:57:25.889183 sshd[6256]: Accepted publickey for core from 147.75.109.163 port 40536 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:25.892261 sshd-session[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:25.900176 systemd-logind[1969]: New session 17 of user core. Jun 20 19:57:25.903428 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:57:27.372986 sshd[6258]: Connection closed by 147.75.109.163 port 40536 Jun 20 19:57:27.373724 sshd-session[6256]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:27.379642 systemd-logind[1969]: Session 17 logged out. Waiting for processes to exit. 
Jun 20 19:57:27.382236 systemd[1]: sshd@16-172.31.30.175:22-147.75.109.163:40536.service: Deactivated successfully. Jun 20 19:57:27.385938 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:57:27.389611 systemd-logind[1969]: Removed session 17. Jun 20 19:57:27.408443 systemd[1]: Started sshd@17-172.31.30.175:22-147.75.109.163:44276.service - OpenSSH per-connection server daemon (147.75.109.163:44276). Jun 20 19:57:27.652967 sshd[6284]: Accepted publickey for core from 147.75.109.163 port 44276 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:27.656099 sshd-session[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:27.665718 systemd-logind[1969]: New session 18 of user core. Jun 20 19:57:27.672418 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:57:29.403978 sshd[6286]: Connection closed by 147.75.109.163 port 44276 Jun 20 19:57:29.407933 sshd-session[6284]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:29.418135 systemd[1]: sshd@17-172.31.30.175:22-147.75.109.163:44276.service: Deactivated successfully. Jun 20 19:57:29.426643 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:57:29.428260 systemd[1]: session-18.scope: Consumed 742ms CPU time, 70.4M memory peak. Jun 20 19:57:29.437154 systemd-logind[1969]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:57:29.461304 systemd[1]: Started sshd@18-172.31.30.175:22-147.75.109.163:44284.service - OpenSSH per-connection server daemon (147.75.109.163:44284). Jun 20 19:57:29.464333 systemd-logind[1969]: Removed session 18. Jun 20 19:57:29.668906 sshd[6300]: Accepted publickey for core from 147.75.109.163 port 44284 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:29.670859 sshd-session[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:29.679290 systemd-logind[1969]: New session 19 of user core. Jun 20 19:57:29.686590 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:57:29.952880 sshd[6302]: Connection closed by 147.75.109.163 port 44284 Jun 20 19:57:29.953490 sshd-session[6300]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:29.957759 systemd[1]: sshd@18-172.31.30.175:22-147.75.109.163:44284.service: Deactivated successfully. Jun 20 19:57:29.960363 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:57:29.962167 systemd-logind[1969]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:57:29.963932 systemd-logind[1969]: Removed session 19. Jun 20 19:57:35.002410 systemd[1]: Started sshd@19-172.31.30.175:22-147.75.109.163:44298.service - OpenSSH per-connection server daemon (147.75.109.163:44298). Jun 20 19:57:35.320130 sshd[6316]: Accepted publickey for core from 147.75.109.163 port 44298 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:35.326142 sshd-session[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:35.333347 systemd-logind[1969]: New session 20 of user core. Jun 20 19:57:35.340588 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 20 19:57:36.368373 containerd[1986]: time="2025-06-20T19:57:36.368198457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26\" id:\"5c5432caf7ec13705e79922f79f002f67c64b791edb15b318351d93750c1b44f\" pid:6338 exited_at:{seconds:1750449456 nanos:361588241}" Jun 20 19:57:36.448178 sshd[6318]: Connection closed by 147.75.109.163 port 44298 Jun 20 19:57:36.448596 sshd-session[6316]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:36.453267 systemd-logind[1969]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:57:36.453390 systemd[1]: sshd@19-172.31.30.175:22-147.75.109.163:44298.service: Deactivated successfully. Jun 20 19:57:36.456756 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:57:36.459538 systemd-logind[1969]: Removed session 20. Jun 20 19:57:39.394934 containerd[1986]: time="2025-06-20T19:57:39.394875335Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26\" id:\"7fcc13dce0c9e07e2706292804f2830b227a1af8705b94c8ea946a4ae98d386c\" pid:6363 exited_at:{seconds:1750449459 nanos:394399915}" Jun 20 19:57:41.490874 systemd[1]: Started sshd@20-172.31.30.175:22-147.75.109.163:50516.service - OpenSSH per-connection server daemon (147.75.109.163:50516). Jun 20 19:57:41.755369 sshd[6373]: Accepted publickey for core from 147.75.109.163 port 50516 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:41.757355 sshd-session[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:41.770015 systemd-logind[1969]: New session 21 of user core. Jun 20 19:57:41.774498 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:57:42.102495 sshd[6375]: Connection closed by 147.75.109.163 port 50516 Jun 20 19:57:42.103530 sshd-session[6373]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:42.108068 systemd-logind[1969]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:57:42.109166 systemd[1]: sshd@20-172.31.30.175:22-147.75.109.163:50516.service: Deactivated successfully. Jun 20 19:57:42.114342 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:57:42.120428 systemd-logind[1969]: Removed session 21. Jun 20 19:57:47.136680 systemd[1]: Started sshd@21-172.31.30.175:22-147.75.109.163:55662.service - OpenSSH per-connection server daemon (147.75.109.163:55662). Jun 20 19:57:47.353035 sshd[6397]: Accepted publickey for core from 147.75.109.163 port 55662 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:47.355396 sshd-session[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:47.361801 systemd-logind[1969]: New session 22 of user core. Jun 20 19:57:47.366436 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:57:48.282481 sshd[6399]: Connection closed by 147.75.109.163 port 55662 Jun 20 19:57:48.294575 sshd-session[6397]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:48.311373 systemd-logind[1969]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:57:48.312658 systemd[1]: sshd@21-172.31.30.175:22-147.75.109.163:55662.service: Deactivated successfully. Jun 20 19:57:48.319290 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:57:48.323214 systemd-logind[1969]: Removed session 22. 
Jun 20 19:57:51.598569 containerd[1986]: time="2025-06-20T19:57:51.598504380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4cffa5d275c17a1e409045b69af71b1dacaebf4f4bd0cfd7a9f44a4ffa88058\" id:\"c415fe40d21ff95c34b5c20598103d667e599ddea43e858d954877e155f61f08\" pid:6423 exited_at:{seconds:1750449471 nanos:596656520}" Jun 20 19:57:52.265053 containerd[1986]: time="2025-06-20T19:57:52.264993204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dde779bef3273b1b16c04443e458f9a51c1f944b28d1338b17c4322230aca\" id:\"007131abf9b2d15913bab0c18040b17e11221ffe24cc4b89b64c494ea892d11c\" pid:6445 exited_at:{seconds:1750449472 nanos:263159697}" Jun 20 19:57:53.320825 systemd[1]: Started sshd@22-172.31.30.175:22-147.75.109.163:55676.service - OpenSSH per-connection server daemon (147.75.109.163:55676). Jun 20 19:57:53.639939 sshd[6460]: Accepted publickey for core from 147.75.109.163 port 55676 ssh2: RSA SHA256:aTTjzy5mYIy3PeeaxkfgVdnra6PV/x835FJtoX+5LLY Jun 20 19:57:53.646499 sshd-session[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:57:53.657146 systemd-logind[1969]: New session 23 of user core. Jun 20 19:57:53.660421 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:57:55.598064 sshd[6468]: Connection closed by 147.75.109.163 port 55676 Jun 20 19:57:55.610359 sshd-session[6460]: pam_unix(sshd:session): session closed for user core Jun 20 19:57:55.628527 systemd[1]: sshd@22-172.31.30.175:22-147.75.109.163:55676.service: Deactivated successfully. Jun 20 19:57:55.634155 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:57:55.642425 systemd-logind[1969]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:57:55.646213 systemd-logind[1969]: Removed session 23. Jun 20 19:58:09.490676 systemd[1]: cri-containerd-e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4.scope: Deactivated successfully. Jun 20 19:58:09.493215 systemd[1]: cri-containerd-e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4.scope: Consumed 3.876s CPU time, 85.6M memory peak, 121.4M read from disk. Jun 20 19:58:09.632265 containerd[1986]: time="2025-06-20T19:58:09.631321720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e2023bcb97b7241df37b9e2b4f6e019de3f93439b3ffa32479f143571fa2f26\" id:\"429f75388526336da3b4daf92ef0c3a7fec0e5e03bdac45f2d48e0afdaf88853\" pid:6494 exited_at:{seconds:1750449489 nanos:583115006}" Jun 20 19:58:09.737180 containerd[1986]: time="2025-06-20T19:58:09.736736433Z" level=info msg="received exit event container_id:\"e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4\" id:\"e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4\" pid:3122 exit_status:1 exited_at:{seconds:1750449489 nanos:676992066}" Jun 20 19:58:09.737783 containerd[1986]: time="2025-06-20T19:58:09.719624834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4\" id:\"e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4\" pid:3122 exit_status:1 exited_at:{seconds:1750449489 nanos:676992066}" Jun 20 19:58:09.842615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4-rootfs.mount: Deactivated successfully. 
Jun 20 19:58:10.163663 kubelet[3303]: I0620 19:58:10.163614 3303 scope.go:117] "RemoveContainer" containerID="e814d31832e44f80f4981eea7345b845d2b4dea9ad1f9d2fa3f952abe44289b4" Jun 20 19:58:10.264896 containerd[1986]: time="2025-06-20T19:58:10.264848317Z" level=info msg="CreateContainer within sandbox \"53b1cf34a75f6e37809a27bc0936936525786fc3ea8ba9e869feb32675b6c435\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 20 19:58:10.399460 containerd[1986]: time="2025-06-20T19:58:10.399409265Z" level=info msg="Container be52690c2a50a1ee28c82e6a87e8e529228995d786f9cac2e2e2dddaa47670ee: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:58:10.403861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740441780.mount: Deactivated successfully. Jun 20 19:58:10.427072 containerd[1986]: time="2025-06-20T19:58:10.426925629Z" level=info msg="CreateContainer within sandbox \"53b1cf34a75f6e37809a27bc0936936525786fc3ea8ba9e869feb32675b6c435\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"be52690c2a50a1ee28c82e6a87e8e529228995d786f9cac2e2e2dddaa47670ee\"" Jun 20 19:58:10.431300 containerd[1986]: time="2025-06-20T19:58:10.430409738Z" level=info msg="StartContainer for \"be52690c2a50a1ee28c82e6a87e8e529228995d786f9cac2e2e2dddaa47670ee\"" Jun 20 19:58:10.432016 containerd[1986]: time="2025-06-20T19:58:10.431812914Z" level=info msg="connecting to shim be52690c2a50a1ee28c82e6a87e8e529228995d786f9cac2e2e2dddaa47670ee" address="unix:///run/containerd/s/23868064aebf656c95aecb49c112cc80887d158542cfda27b99bebe4c20d041f" protocol=ttrpc version=3 Jun 20 19:58:10.525443 systemd[1]: Started cri-containerd-be52690c2a50a1ee28c82e6a87e8e529228995d786f9cac2e2e2dddaa47670ee.scope - libcontainer container be52690c2a50a1ee28c82e6a87e8e529228995d786f9cac2e2e2dddaa47670ee. Jun 20 19:58:10.614319 containerd[1986]: time="2025-06-20T19:58:10.614278647Z" level=info msg="StartContainer for \"be52690c2a50a1ee28c82e6a87e8e529228995d786f9cac2e2e2dddaa47670ee\" returns successfully" Jun 20 19:58:11.328510 systemd[1]: cri-containerd-45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee.scope: Deactivated successfully. Jun 20 19:58:11.330170 systemd[1]: cri-containerd-45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee.scope: Consumed 9.311s CPU time, 106.8M memory peak, 83.8M read from disk. Jun 20 19:58:11.336779 containerd[1986]: time="2025-06-20T19:58:11.335831098Z" level=info msg="received exit event container_id:\"45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee\" id:\"45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee\" pid:3897 exit_status:1 exited_at:{seconds:1750449491 nanos:334562272}" Jun 20 19:58:11.342014 containerd[1986]: time="2025-06-20T19:58:11.340473290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee\" id:\"45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee\" pid:3897 exit_status:1 exited_at:{seconds:1750449491 nanos:334562272}" Jun 20 19:58:11.403261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee-rootfs.mount: Deactivated successfully. 
Jun 20 19:58:12.082019 kubelet[3303]: E0620 19:58:12.081805 3303 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-175?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 20 19:58:12.109906 kubelet[3303]: I0620 19:58:12.109767 3303 scope.go:117] "RemoveContainer" containerID="45cd74a1f588c567723c2a5c19de457b3544f0b9414dbdbdf4ce1879b80efcee" Jun 20 19:58:12.143252 containerd[1986]: time="2025-06-20T19:58:12.142960949Z" level=info msg="CreateContainer within sandbox \"7a0be87f558b52d49ed335d3bf234a500f58acc212ceb2f32933b1d070004ca6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 20 19:58:12.167887 containerd[1986]: time="2025-06-20T19:58:12.167839060Z" level=info msg="Container 9805f5cdcc76597180437964fe1f39a8a925598334f68604d104247504a54bd4: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:58:12.184058 containerd[1986]: time="2025-06-20T19:58:12.183962074Z" level=info msg="CreateContainer within sandbox \"7a0be87f558b52d49ed335d3bf234a500f58acc212ceb2f32933b1d070004ca6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9805f5cdcc76597180437964fe1f39a8a925598334f68604d104247504a54bd4\"" Jun 20 19:58:12.185137 containerd[1986]: time="2025-06-20T19:58:12.184731048Z" level=info msg="StartContainer for \"9805f5cdcc76597180437964fe1f39a8a925598334f68604d104247504a54bd4\"" Jun 20 19:58:12.186485 containerd[1986]: time="2025-06-20T19:58:12.186453456Z" level=info msg="connecting to shim 9805f5cdcc76597180437964fe1f39a8a925598334f68604d104247504a54bd4" address="unix:///run/containerd/s/1034a775efa11cd60a769bdb2f523534e8ac7a0443463d6cedfd662c81ec56e9" protocol=ttrpc version=3 Jun 20 19:58:12.222549 systemd[1]: Started cri-containerd-9805f5cdcc76597180437964fe1f39a8a925598334f68604d104247504a54bd4.scope - libcontainer container 9805f5cdcc76597180437964fe1f39a8a925598334f68604d104247504a54bd4. Jun 20 19:58:12.282518 containerd[1986]: time="2025-06-20T19:58:12.282480321Z" level=info msg="StartContainer for \"9805f5cdcc76597180437964fe1f39a8a925598334f68604d104247504a54bd4\" returns successfully" Jun 20 19:58:15.214388 systemd[1]: cri-containerd-3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681.scope: Deactivated successfully. Jun 20 19:58:15.215349 systemd[1]: cri-containerd-3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681.scope: Consumed 3.314s CPU time, 38.5M memory peak, 70.4M read from disk. Jun 20 19:58:15.219118 containerd[1986]: time="2025-06-20T19:58:15.219003308Z" level=info msg="received exit event container_id:\"3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681\" id:\"3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681\" pid:3144 exit_status:1 exited_at:{seconds:1750449495 nanos:218103014}" Jun 20 19:58:15.219884 containerd[1986]: time="2025-06-20T19:58:15.219122813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681\" id:\"3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681\" pid:3144 exit_status:1 exited_at:{seconds:1750449495 nanos:218103014}" Jun 20 19:58:15.257458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681-rootfs.mount: Deactivated successfully. 
Jun 20 19:58:16.123052 kubelet[3303]: I0620 19:58:16.123013 3303 scope.go:117] "RemoveContainer" containerID="3591a516367461fa64bf944b8c6e31708a5fd9ceb25d8449202adf5f881b8681" Jun 20 19:58:16.126615 containerd[1986]: time="2025-06-20T19:58:16.126573596Z" level=info msg="CreateContainer within sandbox \"6f40ddabd115de838c53212d25c70177ca34fe24b696e2c8664642c61b7234a9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 20 19:58:16.149894 containerd[1986]: time="2025-06-20T19:58:16.147598403Z" level=info msg="Container 3efc1e858759da696bb9c3cf2fee71b50547c1c71d318f9aa0d41bb9e8969d29: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:58:16.166107 containerd[1986]: time="2025-06-20T19:58:16.166064801Z" level=info msg="CreateContainer within sandbox \"6f40ddabd115de838c53212d25c70177ca34fe24b696e2c8664642c61b7234a9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3efc1e858759da696bb9c3cf2fee71b50547c1c71d318f9aa0d41bb9e8969d29\"" Jun 20 19:58:16.167089 containerd[1986]: time="2025-06-20T19:58:16.166737225Z" level=info msg="StartContainer for \"3efc1e858759da696bb9c3cf2fee71b50547c1c71d318f9aa0d41bb9e8969d29\"" Jun 20 19:58:16.168287 containerd[1986]: time="2025-06-20T19:58:16.168214092Z" level=info msg="connecting to shim 3efc1e858759da696bb9c3cf2fee71b50547c1c71d318f9aa0d41bb9e8969d29" address="unix:///run/containerd/s/d853f91a41912ebcb969bbf6f2eeb22b8ff1a3eda36912096799647825a4abef" protocol=ttrpc version=3 Jun 20 19:58:16.226639 systemd[1]: Started cri-containerd-3efc1e858759da696bb9c3cf2fee71b50547c1c71d318f9aa0d41bb9e8969d29.scope - libcontainer container 3efc1e858759da696bb9c3cf2fee71b50547c1c71d318f9aa0d41bb9e8969d29. Jun 20 19:58:16.290158 containerd[1986]: time="2025-06-20T19:58:16.290116814Z" level=info msg="StartContainer for \"3efc1e858759da696bb9c3cf2fee71b50547c1c71d318f9aa0d41bb9e8969d29\" returns successfully"