Apr 30 03:28:03.887804 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:28:03.887827 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:03.887840 kernel: BIOS-provided physical RAM map:
Apr 30 03:28:03.887846 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:28:03.887853 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 30 03:28:03.887859 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 30 03:28:03.887867 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 30 03:28:03.887874 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 30 03:28:03.887881 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 30 03:28:03.887890 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 30 03:28:03.887896 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 30 03:28:03.887903 kernel: NX (Execute Disable) protection: active
Apr 30 03:28:03.887910 kernel: APIC: Static calls initialized
Apr 30 03:28:03.887917 kernel: efi: EFI v2.7 by EDK II
Apr 30 03:28:03.887925 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Apr 30 03:28:03.887935 kernel: SMBIOS 2.7 present.
Apr 30 03:28:03.887943 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 30 03:28:03.887951 kernel: Hypervisor detected: KVM
Apr 30 03:28:03.887959 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:28:03.887966 kernel: kvm-clock: using sched offset of 3874290548 cycles
Apr 30 03:28:03.887974 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:28:03.887982 kernel: tsc: Detected 2499.996 MHz processor
Apr 30 03:28:03.887990 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:28:03.887998 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:28:03.888006 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 30 03:28:03.888016 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:28:03.888024 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:28:03.888032 kernel: Using GB pages for direct mapping
Apr 30 03:28:03.888039 kernel: Secure boot disabled
Apr 30 03:28:03.888047 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:28:03.888054 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 30 03:28:03.888062 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 03:28:03.888070 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 03:28:03.888077 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 30 03:28:03.888087 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 30 03:28:03.888095 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 30 03:28:03.888102 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 03:28:03.888110 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 03:28:03.888118 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 30 03:28:03.888125 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 30 03:28:03.888137 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 03:28:03.888148 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 03:28:03.888156 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 30 03:28:03.888280 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 30 03:28:03.888289 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 30 03:28:03.888297 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 30 03:28:03.888305 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 30 03:28:03.888313 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 30 03:28:03.888325 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 30 03:28:03.888333 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 30 03:28:03.888341 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 30 03:28:03.888349 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 30 03:28:03.888357 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Apr 30 03:28:03.888365 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 30 03:28:03.888374 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:28:03.888382 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:28:03.888390 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 30 03:28:03.888400 kernel: NUMA: Initialized distance table, cnt=1
Apr 30 03:28:03.888409 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Apr 30 03:28:03.888417 kernel: Zone ranges:
Apr 30 03:28:03.888425 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:28:03.888434 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 30 03:28:03.888442 kernel: Normal empty
Apr 30 03:28:03.888450 kernel: Movable zone start for each node
Apr 30 03:28:03.888458 kernel: Early memory node ranges
Apr 30 03:28:03.888466 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:28:03.888474 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 30 03:28:03.888485 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 30 03:28:03.888493 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 30 03:28:03.888501 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:28:03.888509 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:28:03.888518 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 30 03:28:03.888526 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 30 03:28:03.888534 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 30 03:28:03.888542 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:28:03.888550 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 30 03:28:03.888561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:28:03.888569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:28:03.888577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:28:03.888586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:28:03.888594 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:28:03.888602 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:28:03.888610 kernel: TSC deadline timer available
Apr 30 03:28:03.888619 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:28:03.888627 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:28:03.888638 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 30 03:28:03.888646 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:28:03.888655 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:28:03.888663 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:28:03.888671 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:28:03.888679 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:28:03.888687 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:28:03.888695 kernel: kvm-guest: PV spinlocks enabled
Apr 30 03:28:03.888703 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:28:03.888715 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:03.888723 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:28:03.888732 kernel: random: crng init done
Apr 30 03:28:03.888740 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:28:03.888748 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:28:03.888756 kernel: Fallback order for Node 0: 0
Apr 30 03:28:03.888764 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 30 03:28:03.888781 kernel: Policy zone: DMA32
Apr 30 03:28:03.888792 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:28:03.888801 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 162936K reserved, 0K cma-reserved)
Apr 30 03:28:03.888809 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:28:03.888817 kernel: Kernel/User page tables isolation: enabled
Apr 30 03:28:03.888826 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:28:03.888834 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:28:03.888842 kernel: Dynamic Preempt: voluntary
Apr 30 03:28:03.888850 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:28:03.888859 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:28:03.888870 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:28:03.888879 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:28:03.888887 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:28:03.888895 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:28:03.888903 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:28:03.888911 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:28:03.888920 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:28:03.888939 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:28:03.888948 kernel: Console: colour dummy device 80x25
Apr 30 03:28:03.888956 kernel: printk: console [tty0] enabled
Apr 30 03:28:03.888965 kernel: printk: console [ttyS0] enabled
Apr 30 03:28:03.888973 kernel: ACPI: Core revision 20230628
Apr 30 03:28:03.888985 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 30 03:28:03.888994 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:28:03.889002 kernel: x2apic enabled
Apr 30 03:28:03.889011 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:28:03.889020 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 30 03:28:03.889032 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Apr 30 03:28:03.889041 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:28:03.889050 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:28:03.889058 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:28:03.889067 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:28:03.889075 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:28:03.889084 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:28:03.889093 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 03:28:03.889101 kernel: RETBleed: Vulnerable
Apr 30 03:28:03.889112 kernel: Speculative Store Bypass: Vulnerable
Apr 30 03:28:03.889121 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:28:03.889130 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:28:03.889138 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 30 03:28:03.889147 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:28:03.889155 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:28:03.889174 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:28:03.889183 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 30 03:28:03.889191 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 30 03:28:03.889200 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 03:28:03.889208 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 03:28:03.889220 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 03:28:03.889229 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 30 03:28:03.889238 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:28:03.889247 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 30 03:28:03.889256 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 30 03:28:03.889264 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 30 03:28:03.889273 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 30 03:28:03.889282 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 30 03:28:03.889290 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 30 03:28:03.889299 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 30 03:28:03.889307 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:28:03.889316 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:28:03.889328 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:28:03.889336 kernel: landlock: Up and running.
Apr 30 03:28:03.889345 kernel: SELinux: Initializing.
Apr 30 03:28:03.889354 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:28:03.889362 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:28:03.889371 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 03:28:03.889380 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:03.889389 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:03.889398 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:03.889407 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 03:28:03.889418 kernel: signal: max sigframe size: 3632
Apr 30 03:28:03.889427 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:28:03.889436 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:28:03.889444 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:28:03.889453 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:28:03.889462 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:28:03.889470 kernel: .... node #0, CPUs: #1
Apr 30 03:28:03.889480 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 30 03:28:03.889489 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:28:03.889501 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:28:03.889509 kernel: smpboot: Max logical packages: 1
Apr 30 03:28:03.889518 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Apr 30 03:28:03.889527 kernel: devtmpfs: initialized
Apr 30 03:28:03.889535 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:28:03.889544 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 30 03:28:03.889553 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:28:03.889562 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:28:03.889573 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:28:03.889582 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:28:03.889591 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:28:03.889600 kernel: audit: type=2000 audit(1745983683.458:1): state=initialized audit_enabled=0 res=1
Apr 30 03:28:03.889608 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:28:03.889617 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:28:03.889625 kernel: cpuidle: using governor menu
Apr 30 03:28:03.889634 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:28:03.889643 kernel: dca service started, version 1.12.1
Apr 30 03:28:03.889655 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:28:03.889664 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:28:03.889672 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:28:03.889681 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:28:03.889690 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:28:03.889699 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:28:03.889707 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:28:03.889716 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:28:03.889725 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:28:03.889736 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:28:03.889745 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 30 03:28:03.889754 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:28:03.889762 kernel: ACPI: Interpreter enabled
Apr 30 03:28:03.889771 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:28:03.889780 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:28:03.889789 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:28:03.889797 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:28:03.889806 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 03:28:03.889815 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:28:03.889973 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:28:03.890072 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 03:28:03.890188 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 03:28:03.890200 kernel: acpiphp: Slot [3] registered
Apr 30 03:28:03.890209 kernel: acpiphp: Slot [4] registered
Apr 30 03:28:03.890217 kernel: acpiphp: Slot [5] registered
Apr 30 03:28:03.890226 kernel: acpiphp: Slot [6] registered
Apr 30 03:28:03.890238 kernel: acpiphp: Slot [7] registered
Apr 30 03:28:03.890247 kernel: acpiphp: Slot [8] registered
Apr 30 03:28:03.890255 kernel: acpiphp: Slot [9] registered
Apr 30 03:28:03.890264 kernel: acpiphp: Slot [10] registered
Apr 30 03:28:03.890273 kernel: acpiphp: Slot [11] registered
Apr 30 03:28:03.890281 kernel: acpiphp: Slot [12] registered
Apr 30 03:28:03.890290 kernel: acpiphp: Slot [13] registered
Apr 30 03:28:03.890298 kernel: acpiphp: Slot [14] registered
Apr 30 03:28:03.890307 kernel: acpiphp: Slot [15] registered
Apr 30 03:28:03.890318 kernel: acpiphp: Slot [16] registered
Apr 30 03:28:03.890327 kernel: acpiphp: Slot [17] registered
Apr 30 03:28:03.890336 kernel: acpiphp: Slot [18] registered
Apr 30 03:28:03.890344 kernel: acpiphp: Slot [19] registered
Apr 30 03:28:03.890353 kernel: acpiphp: Slot [20] registered
Apr 30 03:28:03.890361 kernel: acpiphp: Slot [21] registered
Apr 30 03:28:03.890370 kernel: acpiphp: Slot [22] registered
Apr 30 03:28:03.890379 kernel: acpiphp: Slot [23] registered
Apr 30 03:28:03.890388 kernel: acpiphp: Slot [24] registered
Apr 30 03:28:03.890397 kernel: acpiphp: Slot [25] registered
Apr 30 03:28:03.890408 kernel: acpiphp: Slot [26] registered
Apr 30 03:28:03.890417 kernel: acpiphp: Slot [27] registered
Apr 30 03:28:03.890426 kernel: acpiphp: Slot [28] registered
Apr 30 03:28:03.890434 kernel: acpiphp: Slot [29] registered
Apr 30 03:28:03.890443 kernel: acpiphp: Slot [30] registered
Apr 30 03:28:03.890452 kernel: acpiphp: Slot [31] registered
Apr 30 03:28:03.890460 kernel: PCI host bridge to bus 0000:00
Apr 30 03:28:03.890556 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:28:03.890644 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:28:03.890726 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:28:03.890808 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 03:28:03.890889 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 30 03:28:03.890971 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:28:03.891082 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 03:28:03.891203 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 03:28:03.891311 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 30 03:28:03.891403 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 30 03:28:03.891496 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 30 03:28:03.891587 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 30 03:28:03.891678 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 30 03:28:03.891769 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 30 03:28:03.891860 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 30 03:28:03.891953 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 30 03:28:03.892049 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 30 03:28:03.892140 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 30 03:28:03.892253 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 03:28:03.892343 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 30 03:28:03.892433 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:28:03.892528 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 03:28:03.892623 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 30 03:28:03.892720 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 03:28:03.892819 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 30 03:28:03.892831 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:28:03.892840 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:28:03.892849 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:28:03.892858 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:28:03.892871 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 03:28:03.892880 kernel: iommu: Default domain type: Translated
Apr 30 03:28:03.892889 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:28:03.892898 kernel: efivars: Registered efivars operations
Apr 30 03:28:03.892907 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:28:03.892916 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:28:03.892925 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 30 03:28:03.892933 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 30 03:28:03.893023 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 30 03:28:03.893117 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 30 03:28:03.893221 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:28:03.893234 kernel: vgaarb: loaded
Apr 30 03:28:03.893243 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 30 03:28:03.893252 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 30 03:28:03.893261 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:28:03.893269 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:28:03.893278 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:28:03.893292 kernel: pnp: PnP ACPI init
Apr 30 03:28:03.893300 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 03:28:03.893310 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:28:03.893319 kernel: NET: Registered PF_INET protocol family
Apr 30 03:28:03.893328 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:28:03.893337 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:28:03.893346 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:28:03.893354 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:28:03.893363 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:28:03.893374 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:28:03.893384 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:28:03.893392 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:28:03.893401 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:28:03.893410 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:28:03.893499 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:28:03.893582 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:28:03.893666 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:28:03.893752 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 03:28:03.893834 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 30 03:28:03.893937 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 03:28:03.893949 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:28:03.893958 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:28:03.893967 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 30 03:28:03.893976 kernel: clocksource: Switched to clocksource tsc
Apr 30 03:28:03.893985 kernel: Initialise system trusted keyrings
Apr 30 03:28:03.893994 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:28:03.894006 kernel: Key type asymmetric registered
Apr 30 03:28:03.894014 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:28:03.894023 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:28:03.894032 kernel: io scheduler mq-deadline registered
Apr 30 03:28:03.894041 kernel: io scheduler kyber registered
Apr 30 03:28:03.894050 kernel: io scheduler bfq registered
Apr 30 03:28:03.894059 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:28:03.894068 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:28:03.894077 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:28:03.894089 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:28:03.894098 kernel: i8042: Warning: Keylock active
Apr 30 03:28:03.894107 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:28:03.894115 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:28:03.894946 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 30 03:28:03.895049 kernel: rtc_cmos 00:00: registered as rtc0
Apr 30 03:28:03.895135 kernel: rtc_cmos 00:00: setting system clock to 2025-04-30T03:28:03 UTC (1745983683)
Apr 30 03:28:03.895298 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 30 03:28:03.895315 kernel: intel_pstate: CPU model not supported
Apr 30 03:28:03.895325 kernel: efifb: probing for efifb
Apr 30 03:28:03.895334 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 30 03:28:03.895343 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 30 03:28:03.895352 kernel: efifb: scrolling: redraw
Apr 30 03:28:03.895361 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:28:03.895370 kernel: Console: switching to colour frame buffer device 100x37
Apr 30 03:28:03.895379 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:28:03.895388 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:28:03.895400 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:28:03.895409 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:28:03.895418 kernel: Segment Routing with IPv6
Apr 30 03:28:03.895426 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:28:03.895435 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:28:03.895444 kernel: Key type dns_resolver registered
Apr 30 03:28:03.895471 kernel: IPI shorthand broadcast: enabled
Apr 30 03:28:03.895482 kernel: sched_clock: Marking stable (459002873, 121190898)->(648899909, -68706138)
Apr 30 03:28:03.895494 kernel: registered taskstats version 1
Apr 30 03:28:03.895506 kernel: Loading compiled-in X.509 certificates
Apr 30 03:28:03.895515 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:28:03.895524 kernel: Key type .fscrypt registered
Apr 30 03:28:03.895533 kernel: Key type fscrypt-provisioning registered
Apr 30 03:28:03.895542 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:28:03.895552 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:28:03.895561 kernel: ima: No architecture policies found
Apr 30 03:28:03.895571 kernel: clk: Disabling unused clocks
Apr 30 03:28:03.895582 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:28:03.895592 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:28:03.895601 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:28:03.898214 kernel: Run /init as init process
Apr 30 03:28:03.898226 kernel: with arguments:
Apr 30 03:28:03.898236 kernel: /init
Apr 30 03:28:03.898245 kernel: with environment:
Apr 30 03:28:03.898255 kernel: HOME=/
Apr 30 03:28:03.898264 kernel: TERM=linux
Apr 30 03:28:03.898274 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:28:03.898291 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:28:03.898304 systemd[1]: Detected virtualization amazon.
Apr 30 03:28:03.898314 systemd[1]: Detected architecture x86-64.
Apr 30 03:28:03.898324 systemd[1]: Running in initrd.
Apr 30 03:28:03.898333 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:28:03.898342 systemd[1]: Hostname set to <localhost>.
Apr 30 03:28:03.898355 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:28:03.898365 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:28:03.898375 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:28:03.898384 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:28:03.898395 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:28:03.898405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:28:03.898415 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:28:03.898428 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:28:03.898439 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:28:03.898449 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:28:03.898459 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:28:03.898469 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:28:03.898481 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:28:03.898494 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:28:03.898504 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:28:03.898527 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:28:03.898542 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:28:03.898552 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:28:03.898562 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:28:03.898572 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:28:03.898582 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:28:03.898594 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:28:03.898604 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:28:03.898614 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:28:03.898624 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:28:03.898634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:28:03.898643 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:28:03.898653 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:28:03.898663 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:28:03.898675 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:28:03.898685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:03.898695 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:28:03.898732 systemd-journald[178]: Collecting audit messages is disabled.
Apr 30 03:28:03.898758 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:28:03.898768 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:28:03.898780 systemd-journald[178]: Journal started
Apr 30 03:28:03.898803 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2ae1e136e2f40999c7b748a74b5900) is 4.7M, max 38.2M, 33.4M free.
Apr 30 03:28:03.900031 systemd-modules-load[179]: Inserted module 'overlay'
Apr 30 03:28:03.909188 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:28:03.913180 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:28:03.915296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:03.916404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:28:03.924371 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:03.927339 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:28:03.932319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:28:03.940184 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:28:03.942587 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 30 03:28:03.943204 kernel: Bridge firewalling registered
Apr 30 03:28:03.945751 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:28:03.955334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:28:03.956082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:03.957232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:28:03.957901 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:28:03.962324 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:28:03.963359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:28:03.967383 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:28:03.977458 dracut-cmdline[211]: dracut-dracut-053
Apr 30 03:28:03.980959 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:03.997156 systemd-resolved[213]: Positive Trust Anchors:
Apr 30 03:28:03.997938 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:28:03.997980 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:28:04.001538 systemd-resolved[213]: Defaulting to hostname 'linux'.
Apr 30 03:28:04.005006 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:28:04.008204 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:28:04.005448 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:28:04.059196 kernel: SCSI subsystem initialized
Apr 30 03:28:04.069187 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:28:04.081188 kernel: iscsi: registered transport (tcp)
Apr 30 03:28:04.102204 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:28:04.102276 kernel: QLogic iSCSI HBA Driver
Apr 30 03:28:04.143220 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:28:04.148404 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:28:04.175524 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:28:04.175604 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:28:04.175627 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:28:04.218199 kernel: raid6: avx512x4 gen() 15445 MB/s
Apr 30 03:28:04.235193 kernel: raid6: avx512x2 gen() 15443 MB/s
Apr 30 03:28:04.252193 kernel: raid6: avx512x1 gen() 15370 MB/s
Apr 30 03:28:04.269191 kernel: raid6: avx2x4 gen() 15247 MB/s
Apr 30 03:28:04.286195 kernel: raid6: avx2x2 gen() 15210 MB/s
Apr 30 03:28:04.303418 kernel: raid6: avx2x1 gen() 11614 MB/s
Apr 30 03:28:04.303473 kernel: raid6: using algorithm avx512x4 gen() 15445 MB/s
Apr 30 03:28:04.323188 kernel: raid6: .... xor() 7497 MB/s, rmw enabled
Apr 30 03:28:04.323242 kernel: raid6: using avx512x2 recovery algorithm
Apr 30 03:28:04.345199 kernel: xor: automatically using best checksumming function avx
Apr 30 03:28:04.508198 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:28:04.518438 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:28:04.528439 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:28:04.541682 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Apr 30 03:28:04.546761 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:28:04.554554 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:28:04.574617 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Apr 30 03:28:04.605245 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:28:04.609379 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:28:04.660061 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:28:04.670203 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:28:04.696675 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:28:04.698711 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:28:04.699608 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:28:04.701232 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:28:04.708409 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:28:04.732442 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:28:04.758221 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:28:04.778318 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 30 03:28:04.808653 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 30 03:28:04.808848 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 30 03:28:04.809011 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:00:66:d2:3f:63
Apr 30 03:28:04.809201 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 30 03:28:04.809386 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 03:28:04.797889 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:28:04.798143 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:04.803035 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:04.803663 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:04.803972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:04.804636 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:04.816452 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:28:04.816486 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:28:04.814689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:04.828224 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 30 03:28:04.829409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:04.829555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:04.842742 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:28:04.842805 kernel: GPT:9289727 != 16777215
Apr 30 03:28:04.842827 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:28:04.842847 kernel: GPT:9289727 != 16777215
Apr 30 03:28:04.842867 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:28:04.842887 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:28:04.846792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:04.847977 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:28:04.871659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:04.876377 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:04.901764 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:04.922363 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Apr 30 03:28:04.939921 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 30 03:28:04.950182 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 30 03:28:04.955988 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (448)
Apr 30 03:28:04.954958 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 30 03:28:04.961332 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:28:04.981294 disk-uuid[620]: Primary Header is updated.
Apr 30 03:28:04.981294 disk-uuid[620]: Secondary Entries is updated.
Apr 30 03:28:04.981294 disk-uuid[620]: Secondary Header is updated.
Apr 30 03:28:04.990088 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 30 03:28:05.006637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 03:28:05.995269 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:28:05.995604 disk-uuid[626]: The operation has completed successfully.
Apr 30 03:28:06.103121 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:28:06.103250 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:28:06.120361 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:28:06.124019 sh[893]: Success
Apr 30 03:28:06.146200 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:28:06.241535 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:28:06.249265 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:28:06.251405 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:28:06.277667 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:28:06.277742 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:06.277765 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:28:06.280848 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:28:06.280915 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:28:06.305190 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 03:28:06.307638 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:28:06.308720 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:28:06.315364 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:28:06.317232 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:28:06.341091 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:06.341154 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:06.341181 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 03:28:06.348189 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 03:28:06.357577 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:28:06.360424 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:06.366090 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:28:06.373360 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:28:06.408193 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:28:06.423418 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:28:06.444732 systemd-networkd[1085]: lo: Link UP
Apr 30 03:28:06.444741 systemd-networkd[1085]: lo: Gained carrier
Apr 30 03:28:06.445993 systemd-networkd[1085]: Enumeration completed
Apr 30 03:28:06.446245 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:28:06.446344 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:06.446348 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:28:06.446700 systemd[1]: Reached target network.target - Network.
Apr 30 03:28:06.449672 systemd-networkd[1085]: eth0: Link UP
Apr 30 03:28:06.449682 systemd-networkd[1085]: eth0: Gained carrier
Apr 30 03:28:06.449694 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:06.458243 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.23.191/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 03:28:06.558350 ignition[1032]: Ignition 2.19.0
Apr 30 03:28:06.558364 ignition[1032]: Stage: fetch-offline
Apr 30 03:28:06.558558 ignition[1032]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:06.558577 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:28:06.560057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:28:06.558915 ignition[1032]: Ignition finished successfully
Apr 30 03:28:06.568440 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:28:06.583586 ignition[1093]: Ignition 2.19.0
Apr 30 03:28:06.583600 ignition[1093]: Stage: fetch
Apr 30 03:28:06.584081 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:06.584095 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:28:06.584245 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:28:06.591322 ignition[1093]: PUT result: OK
Apr 30 03:28:06.592844 ignition[1093]: parsed url from cmdline: ""
Apr 30 03:28:06.592854 ignition[1093]: no config URL provided
Apr 30 03:28:06.592866 ignition[1093]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:06.592888 ignition[1093]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:06.592914 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:28:06.593492 ignition[1093]: PUT result: OK
Apr 30 03:28:06.593539 ignition[1093]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 30 03:28:06.594102 ignition[1093]: GET result: OK
Apr 30 03:28:06.594254 ignition[1093]: parsing config with SHA512: de7cb2326accaf21ccfedde467ea463ecab8826d6a84991031b39860d635eba431b1b25b01db95105fe809e9b3db10fa80d68063048aac6c3229b342cba7a28f
Apr 30 03:28:06.599775 unknown[1093]: fetched base config from "system"
Apr 30 03:28:06.599785 unknown[1093]: fetched base config from "system"
Apr 30 03:28:06.600431 ignition[1093]: fetch: fetch complete
Apr 30 03:28:06.599791 unknown[1093]: fetched user config from "aws"
Apr 30 03:28:06.600437 ignition[1093]: fetch: fetch passed
Apr 30 03:28:06.600496 ignition[1093]: Ignition finished successfully
Apr 30 03:28:06.602641 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:28:06.609349 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:28:06.625675 ignition[1099]: Ignition 2.19.0
Apr 30 03:28:06.625689 ignition[1099]: Stage: kargs
Apr 30 03:28:06.626240 ignition[1099]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:06.626254 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:28:06.626381 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:28:06.627294 ignition[1099]: PUT result: OK
Apr 30 03:28:06.630264 ignition[1099]: kargs: kargs passed
Apr 30 03:28:06.630344 ignition[1099]: Ignition finished successfully
Apr 30 03:28:06.632258 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:28:06.638497 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:28:06.653315 ignition[1105]: Ignition 2.19.0
Apr 30 03:28:06.653328 ignition[1105]: Stage: disks
Apr 30 03:28:06.653833 ignition[1105]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:06.653846 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:28:06.653965 ignition[1105]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:28:06.655627 ignition[1105]: PUT result: OK
Apr 30 03:28:06.658859 ignition[1105]: disks: disks passed
Apr 30 03:28:06.658923 ignition[1105]: Ignition finished successfully
Apr 30 03:28:06.660407 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:28:06.661028 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:28:06.661382 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:28:06.661907 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:28:06.662435 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:28:06.663012 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:28:06.668353 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:28:06.702719 systemd-fsck[1114]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 03:28:06.706998 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:28:06.713303 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:28:06.809443 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:28:06.810159 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:28:06.811105 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:28:06.824369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:28:06.826906 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:28:06.828027 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 03:28:06.828071 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:28:06.828095 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:28:06.834575 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:28:06.836140 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:28:06.846202 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1133)
Apr 30 03:28:06.849841 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:06.849893 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:06.849908 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 03:28:06.865248 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 03:28:06.866437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:28:06.916512 initrd-setup-root[1157]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:28:06.921712 initrd-setup-root[1164]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:28:06.926478 initrd-setup-root[1171]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:28:06.930505 initrd-setup-root[1178]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:28:07.042642 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:28:07.047288 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:28:07.049325 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:28:07.057205 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:07.078305 ignition[1250]: INFO : Ignition 2.19.0 Apr 30 03:28:07.078995 ignition[1250]: INFO : Stage: mount Apr 30 03:28:07.079786 ignition[1250]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:07.080362 ignition[1250]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:28:07.080362 ignition[1250]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:28:07.081817 ignition[1250]: INFO : PUT result: OK Apr 30 03:28:07.084202 ignition[1250]: INFO : mount: mount passed Apr 30 03:28:07.084202 ignition[1250]: INFO : Ignition finished successfully Apr 30 03:28:07.086247 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:28:07.090344 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:28:07.098325 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:28:07.273779 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:28:07.278417 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:07.297183 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1264) Apr 30 03:28:07.301630 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:07.301704 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:07.301719 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 03:28:07.308195 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 03:28:07.310039 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:07.330628 ignition[1281]: INFO : Ignition 2.19.0 Apr 30 03:28:07.330628 ignition[1281]: INFO : Stage: files Apr 30 03:28:07.331677 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:07.331677 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:28:07.331677 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:28:07.332659 ignition[1281]: INFO : PUT result: OK Apr 30 03:28:07.334571 ignition[1281]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:28:07.335903 ignition[1281]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:28:07.335903 ignition[1281]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:28:07.340295 ignition[1281]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:28:07.340956 ignition[1281]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:28:07.340956 ignition[1281]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:28:07.340688 unknown[1281]: wrote ssh authorized keys file for user: core Apr 30 03:28:07.342724 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 03:28:07.343463 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 03:28:07.343463 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:07.343463 ignition[1281]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:28:07.393766 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 03:28:07.536096 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:07.537276 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 03:28:07.857283 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 03:28:07.925481 systemd-networkd[1085]: eth0: Gained IPv6LL Apr 30 03:28:08.190718 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:08.190718 ignition[1281]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 30 03:28:08.193212 ignition[1281]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:08.194124 ignition[1281]: INFO : files: files passed Apr 30 03:28:08.194124 ignition[1281]: INFO : Ignition finished successfully Apr 30 03:28:08.195135 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:28:08.203408 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:28:08.206313 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:28:08.207957 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:28:08.208478 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:28:08.217305 initrd-setup-root-after-ignition[1309]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:08.217305 initrd-setup-root-after-ignition[1309]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:08.219619 initrd-setup-root-after-ignition[1313]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:08.220878 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:08.221759 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:28:08.225447 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:28:08.251674 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:28:08.251813 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:28:08.253136 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:28:08.254300 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:28:08.255089 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:28:08.263435 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:28:08.276575 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:08.281379 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Apr 30 03:28:08.293964 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:08.294960 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:08.296034 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:28:08.296906 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:28:08.297092 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:08.298323 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:28:08.299193 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:28:08.299964 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:28:08.300721 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:08.301578 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:08.302335 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:28:08.303072 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:08.303849 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:28:08.305061 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:28:08.305819 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:28:08.306527 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:28:08.306707 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:08.307773 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:08.308556 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:08.309333 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:28:08.310042 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:08.310585 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:28:08.310805 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:08.312180 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:28:08.312420 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:08.313206 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:28:08.313410 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:28:08.321452 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:28:08.323432 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:28:08.329139 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:28:08.329422 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:08.333602 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:28:08.334485 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 03:28:08.341889 ignition[1333]: INFO : Ignition 2.19.0 Apr 30 03:28:08.341889 ignition[1333]: INFO : Stage: umount Apr 30 03:28:08.344222 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:08.344222 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:28:08.344222 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:28:08.345949 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:28:08.352066 ignition[1333]: INFO : PUT result: OK Apr 30 03:28:08.346082 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:28:08.356187 ignition[1333]: INFO : umount: umount passed Apr 30 03:28:08.356187 ignition[1333]: INFO : Ignition finished successfully Apr 30 03:28:08.354974 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:28:08.355101 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:28:08.356453 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:28:08.356561 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:28:08.357533 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:28:08.357593 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:28:08.358111 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:28:08.358173 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:28:08.360921 systemd[1]: Stopped target network.target - Network. Apr 30 03:28:08.362240 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:28:08.362320 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:08.363025 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:28:08.363480 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:28:08.367309 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:08.368450 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:28:08.368997 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:28:08.369552 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:28:08.369610 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:08.370121 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:28:08.370203 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:08.370687 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:28:08.370761 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:28:08.372341 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:28:08.372406 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:08.373275 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:28:08.373792 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:28:08.376000 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:28:08.376933 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:28:08.377061 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:28:08.377219 systemd-networkd[1085]: eth0: DHCPv6 lease lost Apr 30 03:28:08.379194 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Apr 30 03:28:08.379322 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:28:08.381258 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:28:08.381420 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:28:08.385460 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:28:08.385533 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:08.386276 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:28:08.386344 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:08.391361 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:28:08.391897 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:28:08.391975 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:08.392531 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:28:08.392589 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:08.393298 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:28:08.393354 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:08.395143 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:28:08.395727 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:08.396277 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:08.409485 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:28:08.409656 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:28:08.412581 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:28:08.412881 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:08.414271 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:28:08.414357 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:08.415367 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:28:08.415416 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:08.416045 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:28:08.416104 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:08.417250 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:28:08.417308 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:08.418359 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:08.418419 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:08.423357 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:28:08.423961 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:28:08.424049 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:08.424845 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 03:28:08.424906 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 30 03:28:08.426541 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:28:08.426604 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:08.427137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:08.427218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:08.434739 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:28:08.434883 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:28:08.436123 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:28:08.441402 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:28:08.454863 systemd[1]: Switching root. Apr 30 03:28:08.481192 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Apr 30 03:28:08.481259 systemd-journald[178]: Journal stopped Apr 30 03:28:09.754073 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:28:09.754133 kernel: SELinux: policy capability open_perms=1 Apr 30 03:28:09.754146 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:28:09.754158 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:28:09.754301 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:28:09.754322 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:28:09.754334 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:28:09.754346 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:28:09.754357 kernel: audit: type=1403 audit(1745983688.852:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:28:09.754375 systemd[1]: Successfully loaded SELinux policy in 44.509ms. Apr 30 03:28:09.754401 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.465ms. Apr 30 03:28:09.754415 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:28:09.754431 systemd[1]: Detected virtualization amazon. Apr 30 03:28:09.754447 systemd[1]: Detected architecture x86-64. Apr 30 03:28:09.754462 systemd[1]: Detected first boot. Apr 30 03:28:09.754476 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:28:09.754488 zram_generator::config[1392]: No configuration found. Apr 30 03:28:09.754502 systemd[1]: Populated /etc with preset unit settings. Apr 30 03:28:09.754514 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:28:09.754526 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Apr 30 03:28:09.754539 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:28:09.754552 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:28:09.754567 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:28:09.754579 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:28:09.754591 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:28:09.754604 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Apr 30 03:28:09.754616 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:28:09.754628 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:28:09.754641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:09.754653 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:09.754665 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:28:09.754680 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:28:09.754692 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:28:09.754704 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:28:09.754717 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:28:09.754729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:09.754742 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:28:09.754758 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:09.754771 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:09.754785 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:28:09.754797 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:28:09.754809 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:28:09.754822 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:28:09.754835 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:28:09.754846 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:28:09.754859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:09.754872 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:09.754884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:09.754899 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:28:09.754911 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 03:28:09.754924 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:28:09.754936 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:28:09.754948 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:09.754960 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:28:09.754972 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:28:09.754985 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:28:09.755000 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:28:09.755012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:09.755024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Apr 30 03:28:09.755036 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:28:09.755047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:09.755060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:09.755073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:09.755084 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:28:09.755096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:09.755111 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:28:09.755123 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 30 03:28:09.755136 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 30 03:28:09.755148 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:28:09.757189 kernel: fuse: init (API version 7.39) Apr 30 03:28:09.757235 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:28:09.757250 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:28:09.757264 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:28:09.757284 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:09.757297 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:09.757310 kernel: loop: module loaded Apr 30 03:28:09.757321 kernel: ACPI: bus type drm_connector registered Apr 30 03:28:09.757334 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:28:09.757346 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:28:09.757392 systemd-journald[1497]: Collecting audit messages is disabled. Apr 30 03:28:09.757426 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:28:09.757438 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:28:09.757450 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:28:09.757462 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:28:09.757475 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:28:09.757489 systemd-journald[1497]: Journal started Apr 30 03:28:09.757513 systemd-journald[1497]: Runtime Journal (/run/log/journal/ec2ae1e136e2f40999c7b748a74b5900) is 4.7M, max 38.2M, 33.4M free. Apr 30 03:28:09.758371 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:28:09.760675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:09.761584 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:28:09.761751 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:28:09.762490 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:09.762637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:09.763493 systemd[1]: modprobe@drm.service: Deactivated successfully. 
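From this point the messages are being collected by systemd-journald[1497]. Entries like these can also be read back programmatically; a small sketch assuming the python-systemd bindings are installed (an assumed dependency, not something the log shows):

    from systemd import journal  # python-systemd bindings

    j = journal.Reader()
    j.this_boot()  # restrict to the current boot, like journalctl -b
    j.add_match(_SYSTEMD_UNIT="ignition-files.service")

    for entry in j:
        # Each entry is a dict of journal fields; MESSAGE carries the log text.
        print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])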
Apr 30 03:28:09.763637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:09.764525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:09.764667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:09.765464 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:28:09.765604 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:28:09.766407 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:09.766555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:09.767324 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:09.767988 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:28:09.768661 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:28:09.779809 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:28:09.787397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:28:09.790269 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:28:09.790721 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:28:09.798429 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:28:09.805892 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:28:09.806516 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:09.810837 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:28:09.815339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:09.830360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:28:09.839869 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:28:09.844614 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:09.845353 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:28:09.846319 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:28:09.856430 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:28:09.865458 systemd-journald[1497]: Time spent on flushing to /var/log/journal/ec2ae1e136e2f40999c7b748a74b5900 is 68.570ms for 973 entries. Apr 30 03:28:09.865458 systemd-journald[1497]: System Journal (/var/log/journal/ec2ae1e136e2f40999c7b748a74b5900) is 8.0M, max 195.6M, 187.6M free. Apr 30 03:28:09.952415 systemd-journald[1497]: Received client request to flush runtime journal. Apr 30 03:28:09.872559 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:28:09.874783 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:28:09.908442 udevadm[1549]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
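The flush statistics above (68.570 ms spent writing 973 entries to /var/log/journal) work out to roughly 70 µs per entry persisted:

    # systemd-journald: "Time spent on flushing ... is 68.570ms for 973 entries"
    ms_total, entries = 68.570, 973
    print(f"{ms_total / entries * 1000:.1f} usec per entry")  # ~70.5 usec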
Apr 30 03:28:09.919673 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Apr 30 03:28:09.919698 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Apr 30 03:28:09.930844 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:09.936406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:09.947491 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:28:09.958749 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:28:10.008506 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:28:10.015480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:28:10.040783 systemd-tmpfiles[1566]: ACLs are not supported, ignoring. Apr 30 03:28:10.041254 systemd-tmpfiles[1566]: ACLs are not supported, ignoring. Apr 30 03:28:10.049222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:10.565493 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:28:10.572393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:10.597985 systemd-udevd[1572]: Using default interface naming scheme 'v255'. Apr 30 03:28:10.633498 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:10.643389 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:10.671378 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:28:10.717230 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 30 03:28:10.720338 (udev-worker)[1573]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:28:10.768318 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:28:10.816202 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 30 03:28:10.844455 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 30 03:28:10.864915 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:28:10.864970 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Apr 30 03:28:10.866727 kernel: ACPI: button: Sleep Button [SLPF] Apr 30 03:28:10.873914 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 03:28:10.900218 systemd-networkd[1575]: lo: Link UP Apr 30 03:28:10.900612 systemd-networkd[1575]: lo: Gained carrier Apr 30 03:28:10.902531 systemd-networkd[1575]: Enumeration completed Apr 30 03:28:10.903180 systemd-networkd[1575]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:10.903342 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:10.903988 systemd-networkd[1575]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:10.911380 systemd-networkd[1575]: eth0: Link UP Apr 30 03:28:10.911601 systemd-networkd[1575]: eth0: Gained carrier Apr 30 03:28:10.911648 systemd-networkd[1575]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:10.919449 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Apr 30 03:28:10.920832 systemd-networkd[1575]: eth0: DHCPv4 address 172.31.23.191/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 03:28:10.936197 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1574) Apr 30 03:28:10.952229 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:28:10.978493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:10.996937 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:10.997305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:11.011832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:11.110428 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:28:11.130594 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 30 03:28:11.138258 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:28:11.139686 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:11.163986 lvm[1696]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:11.191411 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:28:11.192417 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:11.198440 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:28:11.205267 lvm[1701]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:11.233566 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:28:11.235233 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:28:11.235908 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:28:11.235949 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:11.236579 systemd[1]: Reached target machines.target - Containers. Apr 30 03:28:11.238778 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:28:11.246351 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:28:11.248757 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:28:11.249586 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:11.255350 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:28:11.265406 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:28:11.269898 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:28:11.275265 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:28:11.291821 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
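The DHCPv4 entry above records the lease 172.31.23.191/20 with gateway 172.31.16.1, both handed out by 172.31.16.1. A quick check with Python's ipaddress module confirms the gateway is on-link for that /20:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.23.191/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(gateway in iface.network)     # True -- the gateway is on-link
    print(iface.network.num_addresses)  # 4096 addresses in a /20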
Apr 30 03:28:11.304550 kernel: loop0: detected capacity change from 0 to 140768 Apr 30 03:28:11.308414 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:28:11.311114 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:28:11.354303 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:28:11.380299 kernel: loop1: detected capacity change from 0 to 61336 Apr 30 03:28:11.493448 kernel: loop2: detected capacity change from 0 to 142488 Apr 30 03:28:11.567456 kernel: loop3: detected capacity change from 0 to 210664 Apr 30 03:28:11.765191 kernel: loop4: detected capacity change from 0 to 140768 Apr 30 03:28:11.791377 kernel: loop5: detected capacity change from 0 to 61336 Apr 30 03:28:11.813283 kernel: loop6: detected capacity change from 0 to 142488 Apr 30 03:28:11.843198 kernel: loop7: detected capacity change from 0 to 210664 Apr 30 03:28:11.869641 (sd-merge)[1723]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 30 03:28:11.870241 (sd-merge)[1723]: Merged extensions into '/usr'. Apr 30 03:28:11.875267 systemd[1]: Reloading requested from client PID 1709 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:28:11.875580 systemd[1]: Reloading... Apr 30 03:28:11.876662 ldconfig[1705]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:28:11.930248 zram_generator::config[1752]: No configuration found. Apr 30 03:28:12.076766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:12.147527 systemd[1]: Reloading finished in 271 ms. Apr 30 03:28:12.163624 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:28:12.164680 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:28:12.173377 systemd[1]: Starting ensure-sysext.service... Apr 30 03:28:12.176344 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:28:12.190312 systemd[1]: Reloading requested from client PID 1810 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:28:12.190337 systemd[1]: Reloading... Apr 30 03:28:12.209669 systemd-tmpfiles[1811]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:28:12.210011 systemd-tmpfiles[1811]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:28:12.210879 systemd-tmpfiles[1811]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:28:12.211150 systemd-tmpfiles[1811]: ACLs are not supported, ignoring. Apr 30 03:28:12.211281 systemd-tmpfiles[1811]: ACLs are not supported, ignoring. Apr 30 03:28:12.214582 systemd-tmpfiles[1811]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:12.214594 systemd-tmpfiles[1811]: Skipping /boot Apr 30 03:28:12.227859 systemd-tmpfiles[1811]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:12.228094 systemd-tmpfiles[1811]: Skipping /boot Apr 30 03:28:12.260198 zram_generator::config[1836]: No configuration found. 
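The (sd-merge) entries above show systemd-sysext discovering four extension images and overlaying them onto /usr, which is what triggers the daemon reload that follows. A sketch of the discovery half only, using the search directories named in the systemd-sysext documentation and matching purely by the .raw suffix (the real tool also validates extension-release metadata and supports plain directories):

    import os

    # Directories the systemd-sysext documentation lists as its search path.
    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    images = []
    for d in SYSEXT_DIRS:
        if os.path.isdir(d):
            images += [os.path.join(d, n) for n in sorted(os.listdir(d)) if n.endswith(".raw")]

    # e.g. /etc/extensions/kubernetes.raw, written by the Ignition files stage earlier.
    print("would merge:", [os.path.basename(p)[: -len(".raw")] for p in images])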
Apr 30 03:28:12.414174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:12.488052 systemd[1]: Reloading finished in 297 ms. Apr 30 03:28:12.509743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:12.521581 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:12.527553 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:28:12.531335 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:28:12.542787 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:28:12.548503 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:28:12.560532 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.561517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:12.566273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:12.582415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:12.599560 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:12.602412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:12.602617 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.608234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:12.608483 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:12.614272 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:12.614520 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:12.626724 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:12.629846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:12.639926 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.642664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:12.654725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:12.666397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:12.674715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:12.675434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:12.675733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.686143 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 30 03:28:12.688511 augenrules[1934]: No rules Apr 30 03:28:12.696327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:12.696618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:12.702552 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:12.703992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:12.704330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:12.711690 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:28:12.717135 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:12.718568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:12.731798 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:12.732190 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:12.741297 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:28:12.746044 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.746454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:12.753614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:12.758022 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:12.771521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:12.775297 systemd-resolved[1907]: Positive Trust Anchors: Apr 30 03:28:12.775309 systemd-resolved[1907]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:12.775370 systemd-resolved[1907]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:12.787585 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:12.789847 systemd-resolved[1907]: Defaulting to hostname 'linux'. Apr 30 03:28:12.792419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:12.792741 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:28:12.795485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.800411 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:28:12.801803 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:28:12.805364 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Apr 30 03:28:12.807417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:12.807776 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:12.808715 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:12.808948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:12.809876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:12.810066 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:12.810896 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:12.811075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:12.818016 systemd[1]: Finished ensure-sysext.service. Apr 30 03:28:12.823833 systemd[1]: Reached target network.target - Network. Apr 30 03:28:12.824334 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:12.824827 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:12.824890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:12.824922 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:28:12.824957 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:12.825457 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:28:12.825851 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:28:12.826414 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:28:12.826866 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:28:12.827237 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:28:12.827584 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:28:12.827626 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:28:12.827959 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:28:12.830506 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:28:12.832519 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:28:12.834901 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:28:12.838278 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:28:12.838682 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:28:12.838994 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:28:12.839464 systemd[1]: System is tainted: cgroupsv1 Apr 30 03:28:12.839498 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:12.839517 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:12.842267 systemd[1]: Starting containerd.service - containerd container runtime... 
Apr 30 03:28:12.845321 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:28:12.852420 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:28:12.854893 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:28:12.857962 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:28:12.864956 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:28:12.880105 jq[1977]: false Apr 30 03:28:12.886282 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:28:12.889446 systemd[1]: Started ntpd.service - Network Time Service. Apr 30 03:28:12.901944 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:28:12.912307 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 30 03:28:12.927385 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:28:12.955535 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:28:12.956318 extend-filesystems[1978]: Found loop4 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found loop5 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found loop6 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found loop7 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found nvme0n1 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found nvme0n1p1 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found nvme0n1p2 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found nvme0n1p3 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found usr Apr 30 03:28:12.956318 extend-filesystems[1978]: Found nvme0n1p4 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found nvme0n1p6 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found nvme0n1p7 Apr 30 03:28:12.956318 extend-filesystems[1978]: Found nvme0n1p9 Apr 30 03:28:12.956318 extend-filesystems[1978]: Checking size of /dev/nvme0n1p9 Apr 30 03:28:12.963048 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:28:12.965701 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:28:12.979713 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:28:12.984264 systemd-networkd[1575]: eth0: Gained IPv6LL Apr 30 03:28:12.994620 dbus-daemon[1975]: [system] SELinux support is enabled Apr 30 03:28:12.998317 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:28:13.005555 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:28:13.008066 ntpd[1983]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:28:13.009236 ntpd[1983]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:28:13.009925 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:28:13.009925 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:28:13.009925 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: ---------------------------------------------------- Apr 30 03:28:13.009925 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:28:13.009925 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:28:13.009925 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: corporation. Support and training for ntp-4 are Apr 30 03:28:13.009925 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: available at https://www.nwtime.org/support Apr 30 03:28:13.009925 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: ---------------------------------------------------- Apr 30 03:28:13.009249 ntpd[1983]: ---------------------------------------------------- Apr 30 03:28:13.009259 ntpd[1983]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:28:13.009270 ntpd[1983]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:28:13.009279 ntpd[1983]: corporation. Support and training for ntp-4 are Apr 30 03:28:13.009290 ntpd[1983]: available at https://www.nwtime.org/support Apr 30 03:28:13.009300 ntpd[1983]: ---------------------------------------------------- Apr 30 03:28:13.012816 ntpd[1983]: proto: precision = 0.060 usec (-24) Apr 30 03:28:13.015336 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: proto: precision = 0.060 usec (-24) Apr 30 03:28:13.015336 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: basedate set to 2025-04-17 Apr 30 03:28:13.015336 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: gps base set to 2025-04-20 (week 2363) Apr 30 03:28:13.013196 ntpd[1983]: basedate set to 2025-04-17 Apr 30 03:28:13.013212 ntpd[1983]: gps base set to 2025-04-20 (week 2363) Apr 30 03:28:13.019442 ntpd[1983]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 03:28:13.019594 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 03:28:13.019686 ntpd[1983]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 03:28:13.019750 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 03:28:13.019973 ntpd[1983]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 03:28:13.020066 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 03:28:13.020145 ntpd[1983]: Listen normally on 3 eth0 172.31.23.191:123 Apr 30 03:28:13.020238 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: Listen normally on 3 eth0 172.31.23.191:123 Apr 30 03:28:13.020323 ntpd[1983]: Listen normally on 4 lo [::1]:123 Apr 30 03:28:13.020385 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: Listen normally on 4 lo [::1]:123 Apr 30 03:28:13.023507 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:28:13.022247 ntpd[1983]: Listen normally on 5 eth0 [fe80::400:66ff:fed2:3f63%2]:123 Apr 30 03:28:13.024374 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: Listen normally on 5 eth0 [fe80::400:66ff:fed2:3f63%2]:123 Apr 30 03:28:13.024374 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: Listening on routing socket on fd #22 for interface updates Apr 30 03:28:13.024374 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:28:13.024374 ntpd[1983]: 30 Apr 03:28:13 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:28:13.022297 ntpd[1983]: Listening on routing socket on fd #22 for interface updates Apr 30 03:28:13.023748 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:28:13.023775 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:28:13.025665 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:28:13.025984 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 30 03:28:13.026870 dbus-daemon[1975]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1575 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 03:28:13.030449 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:28:13.030771 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:28:13.040741 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:28:13.041098 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:28:13.059212 jq[2002]: true Apr 30 03:28:13.075111 update_engine[1999]: I20250430 03:28:13.073542 1999 main.cc:92] Flatcar Update Engine starting Apr 30 03:28:13.090027 update_engine[1999]: I20250430 03:28:13.079695 1999 update_check_scheduler.cc:74] Next update check in 11m37s Apr 30 03:28:13.104039 extend-filesystems[1978]: Resized partition /dev/nvme0n1p9 Apr 30 03:28:13.107881 (ntainerd)[2020]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:28:13.121202 extend-filesystems[2028]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:28:13.119583 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:28:13.129673 jq[2017]: true Apr 30 03:28:13.130745 dbus-daemon[1975]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 03:28:13.131710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:13.147450 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:28:13.151389 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:28:13.151440 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:28:13.152351 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:28:13.152383 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:28:13.157660 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Apr 30 03:28:13.186758 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:28:13.203626 tar[2008]: linux-amd64/helm Apr 30 03:28:13.212388 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 30 03:28:13.215656 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:28:13.219872 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:28:13.235848 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 30 03:28:13.258197 coreos-metadata[1974]: Apr 30 03:28:13.256 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:28:13.290425 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Apr 30 03:28:13.285503 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Apr 30 03:28:13.290583 coreos-metadata[1974]: Apr 30 03:28:13.261 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 30 03:28:13.290583 coreos-metadata[1974]: Apr 30 03:28:13.267 INFO Fetch successful Apr 30 03:28:13.290583 coreos-metadata[1974]: Apr 30 03:28:13.267 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 30 03:28:13.290583 coreos-metadata[1974]: Apr 30 03:28:13.271 INFO Fetch successful Apr 30 03:28:13.290583 coreos-metadata[1974]: Apr 30 03:28:13.271 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 30 03:28:13.290583 coreos-metadata[1974]: Apr 30 03:28:13.273 INFO Fetch successful Apr 30 03:28:13.290583 coreos-metadata[1974]: Apr 30 03:28:13.273 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 30 03:28:13.295718 coreos-metadata[1974]: Apr 30 03:28:13.291 INFO Fetch successful Apr 30 03:28:13.295718 coreos-metadata[1974]: Apr 30 03:28:13.291 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 30 03:28:13.295718 coreos-metadata[1974]: Apr 30 03:28:13.295 INFO Fetch failed with 404: resource not found Apr 30 03:28:13.295718 coreos-metadata[1974]: Apr 30 03:28:13.295 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 30 03:28:13.296787 coreos-metadata[1974]: Apr 30 03:28:13.296 INFO Fetch successful Apr 30 03:28:13.296787 coreos-metadata[1974]: Apr 30 03:28:13.296 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 30 03:28:13.311518 extend-filesystems[2028]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 30 03:28:13.311518 extend-filesystems[2028]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 03:28:13.311518 extend-filesystems[2028]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Apr 30 03:28:13.300947 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:28:13.341372 coreos-metadata[1974]: Apr 30 03:28:13.318 INFO Fetch successful Apr 30 03:28:13.341372 coreos-metadata[1974]: Apr 30 03:28:13.318 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 30 03:28:13.341372 coreos-metadata[1974]: Apr 30 03:28:13.319 INFO Fetch successful Apr 30 03:28:13.341372 coreos-metadata[1974]: Apr 30 03:28:13.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 30 03:28:13.341372 coreos-metadata[1974]: Apr 30 03:28:13.320 INFO Fetch successful Apr 30 03:28:13.341372 coreos-metadata[1974]: Apr 30 03:28:13.320 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 30 03:28:13.341372 coreos-metadata[1974]: Apr 30 03:28:13.322 INFO Fetch successful Apr 30 03:28:13.341658 extend-filesystems[1978]: Resized filesystem in /dev/nvme0n1p9 Apr 30 03:28:13.304480 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:28:13.349034 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:28:13.376620 systemd-logind[1998]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 03:28:13.383535 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:28:13.384471 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 30 03:28:13.385987 systemd-logind[1998]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 30 03:28:13.386012 systemd-logind[1998]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:28:13.394242 systemd-logind[1998]: New seat seat0. Apr 30 03:28:13.406035 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:28:13.438184 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1584) Apr 30 03:28:13.456995 bash[2088]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:28:13.463730 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:28:13.478510 systemd[1]: Starting sshkeys.service... Apr 30 03:28:13.517903 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:28:13.533735 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:28:13.617885 dbus-daemon[1975]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 03:28:13.618326 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 30 03:28:13.621101 dbus-daemon[1975]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2047 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 03:28:13.632111 systemd[1]: Starting polkit.service - Authorization Manager... Apr 30 03:28:13.685862 amazon-ssm-agent[2059]: Initializing new seelog logger Apr 30 03:28:13.686294 amazon-ssm-agent[2059]: New Seelog Logger Creation Complete Apr 30 03:28:13.686294 amazon-ssm-agent[2059]: 2025/04/30 03:28:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:28:13.686294 amazon-ssm-agent[2059]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:28:13.689702 amazon-ssm-agent[2059]: 2025/04/30 03:28:13 processing appconfig overrides Apr 30 03:28:13.697256 amazon-ssm-agent[2059]: 2025/04/30 03:28:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:28:13.697256 amazon-ssm-agent[2059]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:28:13.697641 amazon-ssm-agent[2059]: 2025/04/30 03:28:13 processing appconfig overrides Apr 30 03:28:13.697996 amazon-ssm-agent[2059]: 2025/04/30 03:28:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:28:13.697996 amazon-ssm-agent[2059]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:28:13.698099 amazon-ssm-agent[2059]: 2025/04/30 03:28:13 processing appconfig overrides Apr 30 03:28:13.703650 amazon-ssm-agent[2059]: 2025-04-30 03:28:13 INFO Proxy environment variables: Apr 30 03:28:13.726579 amazon-ssm-agent[2059]: 2025/04/30 03:28:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:28:13.727230 amazon-ssm-agent[2059]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 30 03:28:13.727230 amazon-ssm-agent[2059]: 2025/04/30 03:28:13 processing appconfig overrides Apr 30 03:28:13.765905 polkitd[2109]: Started polkitd version 121 Apr 30 03:28:13.815851 polkitd[2109]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 03:28:13.818278 amazon-ssm-agent[2059]: 2025-04-30 03:28:13 INFO https_proxy: Apr 30 03:28:13.824389 polkitd[2109]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 03:28:13.836333 polkitd[2109]: Finished loading, compiling and executing 2 rules Apr 30 03:28:13.837367 systemd[1]: Started polkit.service - Authorization Manager. Apr 30 03:28:13.837077 dbus-daemon[1975]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 03:28:13.845467 polkitd[2109]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 03:28:13.904369 systemd-hostnamed[2047]: Hostname set to <ip-172-31-23-191> (transient) Apr 30 03:28:13.905207 systemd-resolved[1907]: System hostname changed to 'ip-172-31-23-191'. Apr 30 03:28:13.920991 amazon-ssm-agent[2059]: 2025-04-30 03:28:13 INFO http_proxy: Apr 30 03:28:13.934811 locksmithd[2049]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:28:13.938823 coreos-metadata[2096]: Apr 30 03:28:13.937 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:28:13.942595 coreos-metadata[2096]: Apr 30 03:28:13.939 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 30 03:28:13.954506 coreos-metadata[2096]: Apr 30 03:28:13.952 INFO Fetch successful Apr 30 03:28:13.954506 coreos-metadata[2096]: Apr 30 03:28:13.952 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 03:28:13.954506 coreos-metadata[2096]: Apr 30 03:28:13.953 INFO Fetch successful Apr 30 03:28:13.956073 unknown[2096]: wrote ssh authorized keys file for user: core Apr 30 03:28:14.005243 update-ssh-keys[2212]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:28:14.008623 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:28:14.015840 systemd[1]: Finished sshkeys.service. Apr 30 03:28:14.018108 amazon-ssm-agent[2059]: 2025-04-30 03:28:13 INFO no_proxy: Apr 30 03:28:14.127182 amazon-ssm-agent[2059]: 2025-04-30 03:28:13 INFO Checking if agent identity type OnPrem can be assumed Apr 30 03:28:14.233180 amazon-ssm-agent[2059]: 2025-04-30 03:28:13 INFO Checking if agent identity type EC2 can be assumed Apr 30 03:28:14.330714 containerd[2020]: time="2025-04-30T03:28:14.330553627Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:28:14.332449 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO Agent will take identity from EC2 Apr 30 03:28:14.436180 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:28:14.460298 containerd[2020]: time="2025-04-30T03:28:14.460204767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:14.465610 containerd[2020]: time="2025-04-30T03:28:14.465539102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:14.465769 containerd[2020]: time="2025-04-30T03:28:14.465751877Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:28:14.465864 containerd[2020]: time="2025-04-30T03:28:14.465849715Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:28:14.467915 containerd[2020]: time="2025-04-30T03:28:14.467376199Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:28:14.467915 containerd[2020]: time="2025-04-30T03:28:14.467452126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:14.467915 containerd[2020]: time="2025-04-30T03:28:14.467539918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:14.467915 containerd[2020]: time="2025-04-30T03:28:14.467559419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:14.468280 containerd[2020]: time="2025-04-30T03:28:14.468157278Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:14.468280 containerd[2020]: time="2025-04-30T03:28:14.468232760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:14.468280 containerd[2020]: time="2025-04-30T03:28:14.468255538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:14.468501 containerd[2020]: time="2025-04-30T03:28:14.468433672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:14.468670 containerd[2020]: time="2025-04-30T03:28:14.468653078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:14.469507 containerd[2020]: time="2025-04-30T03:28:14.469484729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:14.470345 containerd[2020]: time="2025-04-30T03:28:14.470317843Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:14.472364 containerd[2020]: time="2025-04-30T03:28:14.470431523Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:28:14.472364 containerd[2020]: time="2025-04-30T03:28:14.470552490Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 30 03:28:14.472364 containerd[2020]: time="2025-04-30T03:28:14.470607625Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:28:14.477756 containerd[2020]: time="2025-04-30T03:28:14.477716558Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:28:14.477976 containerd[2020]: time="2025-04-30T03:28:14.477957293Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:28:14.478149 containerd[2020]: time="2025-04-30T03:28:14.478132747Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:28:14.478244 containerd[2020]: time="2025-04-30T03:28:14.478229079Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:28:14.479454 containerd[2020]: time="2025-04-30T03:28:14.479133538Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:28:14.479454 containerd[2020]: time="2025-04-30T03:28:14.479350074Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:28:14.480016 containerd[2020]: time="2025-04-30T03:28:14.479995888Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:28:14.480344 containerd[2020]: time="2025-04-30T03:28:14.480325481Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481188448Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481228188Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481250977Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481270731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481289826Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481312959Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481347979Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481368678Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481387168Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481404772Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481436101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481456087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481474979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482179 containerd[2020]: time="2025-04-30T03:28:14.481494438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481512272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481531078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481548993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481568029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481587235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481610366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481630384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481648703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481679980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481703882Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481739713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481757579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481773578Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:28:14.482713 containerd[2020]: time="2025-04-30T03:28:14.481823541Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:28:14.483232 containerd[2020]: time="2025-04-30T03:28:14.481848338Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:28:14.483232 containerd[2020]: time="2025-04-30T03:28:14.481866379Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:28:14.483232 containerd[2020]: time="2025-04-30T03:28:14.481884875Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:28:14.483232 containerd[2020]: time="2025-04-30T03:28:14.481899490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.483232 containerd[2020]: time="2025-04-30T03:28:14.481917306Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:28:14.483232 containerd[2020]: time="2025-04-30T03:28:14.481931622Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:28:14.483232 containerd[2020]: time="2025-04-30T03:28:14.481947411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:28:14.486370 containerd[2020]: time="2025-04-30T03:28:14.485480739Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:28:14.486370 containerd[2020]: time="2025-04-30T03:28:14.485586987Z" level=info msg="Connect containerd service" Apr 30 03:28:14.486370 containerd[2020]: time="2025-04-30T03:28:14.485638704Z" level=info msg="using legacy CRI server" Apr 30 03:28:14.486370 containerd[2020]: time="2025-04-30T03:28:14.485648655Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:28:14.486370 containerd[2020]: time="2025-04-30T03:28:14.485789362Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:28:14.488185 containerd[2020]: time="2025-04-30T03:28:14.487010381Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:28:14.490827 containerd[2020]: time="2025-04-30T03:28:14.488634552Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:28:14.490827 containerd[2020]: time="2025-04-30T03:28:14.488711716Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:28:14.490827 containerd[2020]: time="2025-04-30T03:28:14.488791608Z" level=info msg="Start subscribing containerd event" Apr 30 03:28:14.490827 containerd[2020]: time="2025-04-30T03:28:14.488838064Z" level=info msg="Start recovering state" Apr 30 03:28:14.490827 containerd[2020]: time="2025-04-30T03:28:14.488945640Z" level=info msg="Start event monitor" Apr 30 03:28:14.490827 containerd[2020]: time="2025-04-30T03:28:14.488960240Z" level=info msg="Start snapshots syncer" Apr 30 03:28:14.490827 containerd[2020]: time="2025-04-30T03:28:14.488973137Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:28:14.490827 containerd[2020]: time="2025-04-30T03:28:14.488983694Z" level=info msg="Start streaming server" Apr 30 03:28:14.489217 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:28:14.493189 containerd[2020]: time="2025-04-30T03:28:14.492217019Z" level=info msg="containerd successfully booted in 0.166748s" Apr 30 03:28:14.535623 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:28:14.635586 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:28:14.734711 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 30 03:28:14.841184 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 30 03:28:14.923288 tar[2008]: linux-amd64/LICENSE Apr 30 03:28:14.925584 tar[2008]: linux-amd64/README.md Apr 30 03:28:14.933997 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [amazon-ssm-agent] Starting Core Agent Apr 30 03:28:14.933997 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 30 03:28:14.933997 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [Registrar] Starting registrar module Apr 30 03:28:14.933997 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 30 03:28:14.933997 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [EC2Identity] EC2 registration was successful. Apr 30 03:28:14.933997 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [CredentialRefresher] credentialRefresher has started Apr 30 03:28:14.933997 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [CredentialRefresher] Starting credentials refresher loop Apr 30 03:28:14.933997 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 30 03:28:14.941146 amazon-ssm-agent[2059]: 2025-04-30 03:28:14 INFO [CredentialRefresher] Next credential rotation will be in 31.82499441225 minutes Apr 30 03:28:14.950678 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:28:14.973697 sshd_keygen[2025]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:28:14.999258 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:28:15.010568 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:28:15.018985 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:28:15.019324 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:28:15.035654 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:28:15.049295 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:28:15.056626 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:28:15.066492 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:28:15.067371 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:28:15.941980 amazon-ssm-agent[2059]: 2025-04-30 03:28:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 30 03:28:16.043228 amazon-ssm-agent[2059]: 2025-04-30 03:28:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2249) started Apr 30 03:28:16.067366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:16.068334 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:28:16.069661 systemd[1]: Startup finished in 5.742s (kernel) + 7.259s (userspace) = 13.002s. Apr 30 03:28:16.078545 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:16.144188 amazon-ssm-agent[2059]: 2025-04-30 03:28:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 30 03:28:16.917565 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:28:16.924456 systemd[1]: Started sshd@0-172.31.23.191:22-147.75.109.163:37610.service - OpenSSH per-connection server daemon (147.75.109.163:37610). Apr 30 03:28:17.178607 sshd[2277]: Accepted publickey for core from 147.75.109.163 port 37610 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:28:17.181547 sshd[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:17.194116 systemd-logind[1998]: New session 1 of user core. 
Apr 30 03:28:17.194712 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:28:17.201517 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:28:17.220520 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:28:17.234008 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:28:17.238442 (systemd)[2283]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:28:17.378687 systemd[2283]: Queued start job for default target default.target. Apr 30 03:28:17.379277 systemd[2283]: Created slice app.slice - User Application Slice. Apr 30 03:28:17.379311 systemd[2283]: Reached target paths.target - Paths. Apr 30 03:28:17.379330 systemd[2283]: Reached target timers.target - Timers. Apr 30 03:28:17.384302 systemd[2283]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:28:17.395343 systemd[2283]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:28:17.395440 systemd[2283]: Reached target sockets.target - Sockets. Apr 30 03:28:17.395462 systemd[2283]: Reached target basic.target - Basic System. Apr 30 03:28:17.395532 systemd[2283]: Reached target default.target - Main User Target. Apr 30 03:28:17.395573 systemd[2283]: Startup finished in 150ms. Apr 30 03:28:17.396261 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:28:17.407622 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:28:17.616981 systemd[1]: Started sshd@1-172.31.23.191:22-147.75.109.163:37614.service - OpenSSH per-connection server daemon (147.75.109.163:37614). Apr 30 03:28:17.787621 kubelet[2264]: E0430 03:28:17.787521 2264 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:17.789979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:17.790206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:28:17.864146 sshd[2297]: Accepted publickey for core from 147.75.109.163 port 37614 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:28:17.866216 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:17.871258 systemd-logind[1998]: New session 2 of user core. Apr 30 03:28:17.880567 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:28:18.057102 sshd[2297]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:18.059715 systemd[1]: sshd@1-172.31.23.191:22-147.75.109.163:37614.service: Deactivated successfully. Apr 30 03:28:18.063512 systemd-logind[1998]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:28:18.064408 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:28:18.065301 systemd-logind[1998]: Removed session 2. Apr 30 03:28:18.099563 systemd[1]: Started sshd@2-172.31.23.191:22-147.75.109.163:37620.service - OpenSSH per-connection server daemon (147.75.109.163:37620). 
Apr 30 03:28:18.343231 sshd[2308]: Accepted publickey for core from 147.75.109.163 port 37620 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:28:18.344944 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:18.350090 systemd-logind[1998]: New session 3 of user core. Apr 30 03:28:18.356556 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:28:18.531148 sshd[2308]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:18.534376 systemd[1]: sshd@2-172.31.23.191:22-147.75.109.163:37620.service: Deactivated successfully. Apr 30 03:28:18.537089 systemd-logind[1998]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:28:18.538157 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:28:18.539674 systemd-logind[1998]: Removed session 3. Apr 30 03:28:18.574484 systemd[1]: Started sshd@3-172.31.23.191:22-147.75.109.163:37630.service - OpenSSH per-connection server daemon (147.75.109.163:37630). Apr 30 03:28:18.817244 sshd[2316]: Accepted publickey for core from 147.75.109.163 port 37630 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:28:18.818921 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:18.823359 systemd-logind[1998]: New session 4 of user core. Apr 30 03:28:18.839554 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:28:19.010076 sshd[2316]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:19.012468 systemd[1]: sshd@3-172.31.23.191:22-147.75.109.163:37630.service: Deactivated successfully. Apr 30 03:28:19.015876 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:28:19.015913 systemd-logind[1998]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:28:19.018430 systemd-logind[1998]: Removed session 4. Apr 30 03:28:19.052561 systemd[1]: Started sshd@4-172.31.23.191:22-147.75.109.163:37640.service - OpenSSH per-connection server daemon (147.75.109.163:37640). Apr 30 03:28:19.311477 sshd[2324]: Accepted publickey for core from 147.75.109.163 port 37640 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:28:19.312932 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:19.317755 systemd-logind[1998]: New session 5 of user core. Apr 30 03:28:19.320489 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:28:19.486741 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:28:19.487032 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:19.498090 sudo[2328]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:19.536207 sshd[2324]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:19.539500 systemd[1]: sshd@4-172.31.23.191:22-147.75.109.163:37640.service: Deactivated successfully. Apr 30 03:28:19.543152 systemd-logind[1998]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:28:19.543519 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:28:19.545241 systemd-logind[1998]: Removed session 5. Apr 30 03:28:19.577671 systemd[1]: Started sshd@5-172.31.23.191:22-147.75.109.163:37644.service - OpenSSH per-connection server daemon (147.75.109.163:37644). 
Apr 30 03:28:19.819407 sshd[2333]: Accepted publickey for core from 147.75.109.163 port 37644 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:28:19.820860 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:19.825719 systemd-logind[1998]: New session 6 of user core. Apr 30 03:28:19.832513 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:28:19.974521 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:28:19.974927 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:19.979135 sudo[2338]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:19.984917 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:28:19.985419 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:20.000902 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:20.003269 auditctl[2341]: No rules Apr 30 03:28:20.003680 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:28:20.003951 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:20.546067 systemd-resolved[1907]: Clock change detected. Flushing caches. Apr 30 03:28:20.547550 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:20.574434 augenrules[2360]: No rules Apr 30 03:28:20.576371 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:20.580462 sudo[2337]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:20.617957 sshd[2333]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:20.621055 systemd[1]: sshd@5-172.31.23.191:22-147.75.109.163:37644.service: Deactivated successfully. Apr 30 03:28:20.625246 systemd-logind[1998]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:28:20.626346 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:28:20.627596 systemd-logind[1998]: Removed session 6. Apr 30 03:28:20.662937 systemd[1]: Started sshd@6-172.31.23.191:22-147.75.109.163:37656.service - OpenSSH per-connection server daemon (147.75.109.163:37656). Apr 30 03:28:20.905608 sshd[2369]: Accepted publickey for core from 147.75.109.163 port 37656 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:28:20.906749 sshd[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:20.911688 systemd-logind[1998]: New session 7 of user core. Apr 30 03:28:20.920975 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:28:21.060841 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:28:21.061129 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:21.426924 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:28:21.436301 (dockerd)[2389]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:28:21.807104 dockerd[2389]: time="2025-04-30T03:28:21.807000007Z" level=info msg="Starting up" Apr 30 03:28:21.899842 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport516766041-merged.mount: Deactivated successfully. 
Apr 30 03:28:22.069109 dockerd[2389]: time="2025-04-30T03:28:22.068561153Z" level=info msg="Loading containers: start." Apr 30 03:28:22.195590 kernel: Initializing XFRM netlink socket Apr 30 03:28:22.227659 (udev-worker)[2410]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:28:22.288072 systemd-networkd[1575]: docker0: Link UP Apr 30 03:28:22.312339 dockerd[2389]: time="2025-04-30T03:28:22.312288570Z" level=info msg="Loading containers: done." Apr 30 03:28:22.329611 dockerd[2389]: time="2025-04-30T03:28:22.329433351Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:28:22.329818 dockerd[2389]: time="2025-04-30T03:28:22.329609900Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:28:22.329818 dockerd[2389]: time="2025-04-30T03:28:22.329766624Z" level=info msg="Daemon has completed initialization" Apr 30 03:28:22.367480 dockerd[2389]: time="2025-04-30T03:28:22.367359687Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:28:22.367873 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:28:23.746864 containerd[2020]: time="2025-04-30T03:28:23.746762127Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:28:24.322031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660775492.mount: Deactivated successfully. Apr 30 03:28:25.879761 containerd[2020]: time="2025-04-30T03:28:25.879706588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:25.880787 containerd[2020]: time="2025-04-30T03:28:25.880742240Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" Apr 30 03:28:25.885153 containerd[2020]: time="2025-04-30T03:28:25.885075658Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:25.890471 containerd[2020]: time="2025-04-30T03:28:25.889795204Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.142964183s" Apr 30 03:28:25.890471 containerd[2020]: time="2025-04-30T03:28:25.889840984Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:28:25.890471 containerd[2020]: time="2025-04-30T03:28:25.890412655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:25.914778 containerd[2020]: time="2025-04-30T03:28:25.914749614Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:28:27.703236 containerd[2020]: time="2025-04-30T03:28:27.703165797Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:27.704234 containerd[2020]: time="2025-04-30T03:28:27.704185799Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" Apr 30 03:28:27.705365 containerd[2020]: time="2025-04-30T03:28:27.705277175Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:27.708317 containerd[2020]: time="2025-04-30T03:28:27.708257620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:27.710709 containerd[2020]: time="2025-04-30T03:28:27.709837088Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.794911731s" Apr 30 03:28:27.710709 containerd[2020]: time="2025-04-30T03:28:27.709871954Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 03:28:27.736449 containerd[2020]: time="2025-04-30T03:28:27.736411874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 03:28:28.498881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:28:28.505875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:28.750388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:28.764851 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:28.851916 kubelet[2615]: E0430 03:28:28.851858 2615 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:28.859487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:28.859770 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:28:29.143129 containerd[2020]: time="2025-04-30T03:28:29.142909029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:29.145259 containerd[2020]: time="2025-04-30T03:28:29.145169051Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" Apr 30 03:28:29.147986 containerd[2020]: time="2025-04-30T03:28:29.147910971Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:29.153516 containerd[2020]: time="2025-04-30T03:28:29.153471602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:29.154748 containerd[2020]: time="2025-04-30T03:28:29.154627304Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.418178276s" Apr 30 03:28:29.154748 containerd[2020]: time="2025-04-30T03:28:29.154665240Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 03:28:29.183176 containerd[2020]: time="2025-04-30T03:28:29.183131945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 03:28:30.375946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4006738906.mount: Deactivated successfully. 
Apr 30 03:28:30.873867 containerd[2020]: time="2025-04-30T03:28:30.873772559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:30.875602 containerd[2020]: time="2025-04-30T03:28:30.875525162Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" Apr 30 03:28:30.877716 containerd[2020]: time="2025-04-30T03:28:30.877658034Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:30.881408 containerd[2020]: time="2025-04-30T03:28:30.881360782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:30.882723 containerd[2020]: time="2025-04-30T03:28:30.882136964Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.698952587s" Apr 30 03:28:30.882723 containerd[2020]: time="2025-04-30T03:28:30.882180477Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:28:30.908228 containerd[2020]: time="2025-04-30T03:28:30.908192076Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:28:31.441396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount735295808.mount: Deactivated successfully. 
Apr 30 03:28:32.347294 containerd[2020]: time="2025-04-30T03:28:32.347223148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:32.348645 containerd[2020]: time="2025-04-30T03:28:32.348591938Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 03:28:32.349877 containerd[2020]: time="2025-04-30T03:28:32.349818837Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:32.352870 containerd[2020]: time="2025-04-30T03:28:32.352815095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:32.353712 containerd[2020]: time="2025-04-30T03:28:32.353680195Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.44544425s" Apr 30 03:28:32.353781 containerd[2020]: time="2025-04-30T03:28:32.353720028Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:28:32.380501 containerd[2020]: time="2025-04-30T03:28:32.379763479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:28:32.928397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717436281.mount: Deactivated successfully. 
Apr 30 03:28:32.933832 containerd[2020]: time="2025-04-30T03:28:32.933790314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:32.934712 containerd[2020]: time="2025-04-30T03:28:32.934651177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Apr 30 03:28:32.935772 containerd[2020]: time="2025-04-30T03:28:32.935726838Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:32.938201 containerd[2020]: time="2025-04-30T03:28:32.938169797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:32.939300 containerd[2020]: time="2025-04-30T03:28:32.938842061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 559.038528ms" Apr 30 03:28:32.939300 containerd[2020]: time="2025-04-30T03:28:32.938884028Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:28:32.964311 containerd[2020]: time="2025-04-30T03:28:32.964277378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:28:33.456647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009418361.mount: Deactivated successfully. Apr 30 03:28:35.344643 containerd[2020]: time="2025-04-30T03:28:35.344551751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:35.346647 containerd[2020]: time="2025-04-30T03:28:35.346587984Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Apr 30 03:28:35.349094 containerd[2020]: time="2025-04-30T03:28:35.349032594Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:35.353042 containerd[2020]: time="2025-04-30T03:28:35.353001496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:35.354243 containerd[2020]: time="2025-04-30T03:28:35.354084313Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.389723335s" Apr 30 03:28:35.354243 containerd[2020]: time="2025-04-30T03:28:35.354115529Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:28:38.239924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
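[Editor's note] The entries above show the kubelet driving containerd over CRI to pull the control-plane images (kube-scheduler, kube-proxy, coredns, pause, etcd): each pull surfaces as a pair of ImageCreate events (one for the repo tag, one for the repo digest) followed by a "Pulled image" summary with size and duration. A minimal Go sketch of the same operation against the containerd socket — assuming the standard github.com/containerd/containerd client and the "k8s.io" namespace that CRI-managed images live in; this is an illustration, not the kubelet's actual code path:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to via CRI.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI keeps its images in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack one of the images seen in the log above; containerd
	// emits the corresponding ImageCreate events as this completes.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}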
Apr 30 03:28:38.245935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:38.276691 systemd[1]: Reloading requested from client PID 2810 ('systemctl') (unit session-7.scope)... Apr 30 03:28:38.276708 systemd[1]: Reloading... Apr 30 03:28:38.383590 zram_generator::config[2849]: No configuration found. Apr 30 03:28:38.559770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:38.644484 systemd[1]: Reloading finished in 367 ms. Apr 30 03:28:38.693239 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:38.698129 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:28:38.698485 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:38.706016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:38.895731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:38.900044 (kubelet)[2929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:28:38.952701 kubelet[2929]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:38.953031 kubelet[2929]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:28:38.953031 kubelet[2929]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:38.953031 kubelet[2929]: I0430 03:28:38.952807 2929 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:28:39.345658 kubelet[2929]: I0430 03:28:39.345614 2929 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:28:39.345658 kubelet[2929]: I0430 03:28:39.345646 2929 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:28:39.345945 kubelet[2929]: I0430 03:28:39.345918 2929 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:28:39.379055 kubelet[2929]: I0430 03:28:39.378998 2929 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:28:39.381982 kubelet[2929]: E0430 03:28:39.381949 2929 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:39.397791 kubelet[2929]: I0430 03:28:39.397736 2929 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:28:39.402855 kubelet[2929]: I0430 03:28:39.402782 2929 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:28:39.403431 kubelet[2929]: I0430 03:28:39.402855 2929 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-191","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:28:39.404890 kubelet[2929]: I0430 03:28:39.404853 2929 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:28:39.404890 kubelet[2929]: I0430 03:28:39.404889 2929 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:28:39.409149 kubelet[2929]: I0430 03:28:39.409089 2929 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:39.410345 kubelet[2929]: I0430 03:28:39.410316 2929 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:28:39.410345 kubelet[2929]: I0430 03:28:39.410344 2929 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:28:39.410680 kubelet[2929]: I0430 03:28:39.410375 2929 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:28:39.410680 kubelet[2929]: I0430 03:28:39.410399 2929 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:28:39.419166 kubelet[2929]: W0430 03:28:39.418774 2929 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.191:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:39.419166 kubelet[2929]: E0430 03:28:39.418848 2929 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.191:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:39.419474 kubelet[2929]: W0430 03:28:39.419409 2929 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.23.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-191&limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:39.419474 kubelet[2929]: E0430 03:28:39.419451 2929 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-191&limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:39.419814 kubelet[2929]: I0430 03:28:39.419632 2929 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:28:39.421634 kubelet[2929]: I0430 03:28:39.421592 2929 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:28:39.421727 kubelet[2929]: W0430 03:28:39.421653 2929 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:28:39.422471 kubelet[2929]: I0430 03:28:39.422436 2929 server.go:1264] "Started kubelet" Apr 30 03:28:39.423706 kubelet[2929]: I0430 03:28:39.423664 2929 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:28:39.430863 kubelet[2929]: I0430 03:28:39.430797 2929 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:28:39.431548 kubelet[2929]: I0430 03:28:39.431262 2929 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:28:39.431548 kubelet[2929]: E0430 03:28:39.431409 2929 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.191:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.191:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-191.183afaf133c9d1a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-191,UID:ip-172-31-23-191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-191,},FirstTimestamp:2025-04-30 03:28:39.422415272 +0000 UTC m=+0.518590205,LastTimestamp:2025-04-30 03:28:39.422415272 +0000 UTC m=+0.518590205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-191,}" Apr 30 03:28:39.434529 kubelet[2929]: I0430 03:28:39.434308 2929 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:28:39.437780 kubelet[2929]: I0430 03:28:39.436898 2929 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:28:39.441772 kubelet[2929]: I0430 03:28:39.441224 2929 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:28:39.446889 kubelet[2929]: I0430 03:28:39.446862 2929 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:28:39.447108 kubelet[2929]: I0430 03:28:39.447042 2929 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:28:39.447696 kubelet[2929]: W0430 03:28:39.447383 2929 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:39.447696 kubelet[2929]: E0430 
03:28:39.447432 2929 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:39.447696 kubelet[2929]: E0430 03:28:39.447484 2929 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-191?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="200ms" Apr 30 03:28:39.447821 kubelet[2929]: E0430 03:28:39.447745 2929 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:28:39.448801 kubelet[2929]: I0430 03:28:39.448389 2929 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:28:39.448801 kubelet[2929]: I0430 03:28:39.448452 2929 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:28:39.449736 kubelet[2929]: I0430 03:28:39.449721 2929 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:28:39.463294 kubelet[2929]: I0430 03:28:39.463251 2929 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:28:39.464627 kubelet[2929]: I0430 03:28:39.464497 2929 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:28:39.464627 kubelet[2929]: I0430 03:28:39.464526 2929 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:28:39.464627 kubelet[2929]: I0430 03:28:39.464548 2929 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:28:39.464823 kubelet[2929]: E0430 03:28:39.464804 2929 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:28:39.472409 kubelet[2929]: W0430 03:28:39.472260 2929 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:39.472409 kubelet[2929]: E0430 03:28:39.472318 2929 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:39.486419 kubelet[2929]: I0430 03:28:39.486200 2929 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:28:39.486419 kubelet[2929]: I0430 03:28:39.486219 2929 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:28:39.486419 kubelet[2929]: I0430 03:28:39.486235 2929 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:39.490879 kubelet[2929]: I0430 03:28:39.490765 2929 policy_none.go:49] "None policy: Start" Apr 30 03:28:39.491466 kubelet[2929]: I0430 03:28:39.491425 2929 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:28:39.491466 kubelet[2929]: I0430 03:28:39.491458 2929 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:28:39.497553 
kubelet[2929]: I0430 03:28:39.497523 2929 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:28:39.497750 kubelet[2929]: I0430 03:28:39.497716 2929 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:28:39.497845 kubelet[2929]: I0430 03:28:39.497812 2929 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:28:39.502006 kubelet[2929]: E0430 03:28:39.501974 2929 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-191\" not found" Apr 30 03:28:39.543850 kubelet[2929]: I0430 03:28:39.543813 2929 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-191" Apr 30 03:28:39.544214 kubelet[2929]: E0430 03:28:39.544183 2929 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.191:6443/api/v1/nodes\": dial tcp 172.31.23.191:6443: connect: connection refused" node="ip-172-31-23-191" Apr 30 03:28:39.566102 kubelet[2929]: I0430 03:28:39.565742 2929 topology_manager.go:215] "Topology Admit Handler" podUID="4687d6a66dc4ea77407d51c42e6763d6" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-191" Apr 30 03:28:39.567253 kubelet[2929]: I0430 03:28:39.567221 2929 topology_manager.go:215] "Topology Admit Handler" podUID="3cd3b43929ebc163fcf06bbc4503efe3" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:39.568939 kubelet[2929]: I0430 03:28:39.568754 2929 topology_manager.go:215] "Topology Admit Handler" podUID="31ca1f2d508ea85593b822b217696566" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-191" Apr 30 03:28:39.648678 kubelet[2929]: I0430 03:28:39.648395 2929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4687d6a66dc4ea77407d51c42e6763d6-ca-certs\") pod \"kube-apiserver-ip-172-31-23-191\" (UID: \"4687d6a66dc4ea77407d51c42e6763d6\") " pod="kube-system/kube-apiserver-ip-172-31-23-191" Apr 30 03:28:39.648678 kubelet[2929]: I0430 03:28:39.648430 2929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4687d6a66dc4ea77407d51c42e6763d6-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-191\" (UID: \"4687d6a66dc4ea77407d51c42e6763d6\") " pod="kube-system/kube-apiserver-ip-172-31-23-191" Apr 30 03:28:39.648678 kubelet[2929]: I0430 03:28:39.648455 2929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4687d6a66dc4ea77407d51c42e6763d6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-191\" (UID: \"4687d6a66dc4ea77407d51c42e6763d6\") " pod="kube-system/kube-apiserver-ip-172-31-23-191" Apr 30 03:28:39.648678 kubelet[2929]: I0430 03:28:39.648472 2929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:39.648678 kubelet[2929]: E0430 03:28:39.648476 2929 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-191?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="400ms" Apr 30 03:28:39.648907 kubelet[2929]: I0430 03:28:39.648490 2929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31ca1f2d508ea85593b822b217696566-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-191\" (UID: \"31ca1f2d508ea85593b822b217696566\") " pod="kube-system/kube-scheduler-ip-172-31-23-191" Apr 30 03:28:39.648907 kubelet[2929]: I0430 03:28:39.648529 2929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:39.648907 kubelet[2929]: I0430 03:28:39.648549 2929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:39.648907 kubelet[2929]: I0430 03:28:39.648587 2929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:39.648907 kubelet[2929]: I0430 03:28:39.648606 2929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:39.746333 kubelet[2929]: I0430 03:28:39.746250 2929 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-191" Apr 30 03:28:39.746577 kubelet[2929]: E0430 03:28:39.746545 2929 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.191:6443/api/v1/nodes\": dial tcp 172.31.23.191:6443: connect: connection refused" node="ip-172-31-23-191" Apr 30 03:28:39.877724 containerd[2020]: time="2025-04-30T03:28:39.876763354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-191,Uid:3cd3b43929ebc163fcf06bbc4503efe3,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:39.885173 containerd[2020]: time="2025-04-30T03:28:39.884691862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-191,Uid:31ca1f2d508ea85593b822b217696566,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:39.885173 containerd[2020]: time="2025-04-30T03:28:39.884934468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-191,Uid:4687d6a66dc4ea77407d51c42e6763d6,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:40.049944 kubelet[2929]: E0430 03:28:40.049896 2929 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-191?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="800ms" Apr 30 03:28:40.148733 kubelet[2929]: I0430 03:28:40.148692 2929 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-191" Apr 30 03:28:40.149034 kubelet[2929]: E0430 03:28:40.149003 2929 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.191:6443/api/v1/nodes\": dial tcp 172.31.23.191:6443: connect: connection refused" node="ip-172-31-23-191" Apr 30 03:28:40.416732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55787298.mount: Deactivated successfully. Apr 30 03:28:40.432338 containerd[2020]: time="2025-04-30T03:28:40.432279482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:40.434400 containerd[2020]: time="2025-04-30T03:28:40.434357414Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:40.436294 containerd[2020]: time="2025-04-30T03:28:40.436202000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 03:28:40.438191 containerd[2020]: time="2025-04-30T03:28:40.438128567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:28:40.440242 containerd[2020]: time="2025-04-30T03:28:40.440187046Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:40.443102 containerd[2020]: time="2025-04-30T03:28:40.442556735Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:40.446128 containerd[2020]: time="2025-04-30T03:28:40.446043655Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:28:40.448739 containerd[2020]: time="2025-04-30T03:28:40.448669708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:40.456600 containerd[2020]: time="2025-04-30T03:28:40.454584363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 577.710198ms" Apr 30 03:28:40.462502 containerd[2020]: time="2025-04-30T03:28:40.462447783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
577.66996ms" Apr 30 03:28:40.472394 containerd[2020]: time="2025-04-30T03:28:40.472345407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.335184ms" Apr 30 03:28:40.476815 kubelet[2929]: W0430 03:28:40.476704 2929 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.191:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:40.476815 kubelet[2929]: E0430 03:28:40.476784 2929 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.191:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:40.666322 containerd[2020]: time="2025-04-30T03:28:40.665660309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:40.666322 containerd[2020]: time="2025-04-30T03:28:40.665729360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:40.666322 containerd[2020]: time="2025-04-30T03:28:40.665762540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:40.666322 containerd[2020]: time="2025-04-30T03:28:40.665899803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:40.668617 containerd[2020]: time="2025-04-30T03:28:40.667447498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:40.668617 containerd[2020]: time="2025-04-30T03:28:40.667510186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:40.668617 containerd[2020]: time="2025-04-30T03:28:40.667533754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:40.668617 containerd[2020]: time="2025-04-30T03:28:40.667656825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:40.669594 containerd[2020]: time="2025-04-30T03:28:40.668342966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:40.669594 containerd[2020]: time="2025-04-30T03:28:40.668404676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:40.669594 containerd[2020]: time="2025-04-30T03:28:40.668429237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:40.669594 containerd[2020]: time="2025-04-30T03:28:40.668535940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:40.702243 kubelet[2929]: W0430 03:28:40.701404 2929 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:40.702243 kubelet[2929]: E0430 03:28:40.701496 2929 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:40.769898 kubelet[2929]: W0430 03:28:40.769701 2929 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-191&limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:40.770046 kubelet[2929]: E0430 03:28:40.769924 2929 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-191&limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:40.787346 containerd[2020]: time="2025-04-30T03:28:40.787299391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-191,Uid:31ca1f2d508ea85593b822b217696566,Namespace:kube-system,Attempt:0,} returns sandbox id \"7048168b3aeb876df13e2f442c78ccba6bf5066ff33eea84b0436e2819adecb3\"" Apr 30 03:28:40.802261 containerd[2020]: time="2025-04-30T03:28:40.802188840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-191,Uid:4687d6a66dc4ea77407d51c42e6763d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"16ee9fae8c2b8d498772467d16514aa56d3d3bcb0dcb218726536bd9b98f039c\"" Apr 30 03:28:40.811412 containerd[2020]: time="2025-04-30T03:28:40.810514597Z" level=info msg="CreateContainer within sandbox \"16ee9fae8c2b8d498772467d16514aa56d3d3bcb0dcb218726536bd9b98f039c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:28:40.811412 containerd[2020]: time="2025-04-30T03:28:40.810964254Z" level=info msg="CreateContainer within sandbox \"7048168b3aeb876df13e2f442c78ccba6bf5066ff33eea84b0436e2819adecb3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:28:40.813239 containerd[2020]: time="2025-04-30T03:28:40.813199055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-191,Uid:3cd3b43929ebc163fcf06bbc4503efe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"11f3669d236c693b50e36f93fb23ec8faa6f5de13579f9a5dd2cb590020c88cf\"" Apr 30 03:28:40.816200 containerd[2020]: time="2025-04-30T03:28:40.816115579Z" level=info msg="CreateContainer within sandbox \"11f3669d236c693b50e36f93fb23ec8faa6f5de13579f9a5dd2cb590020c88cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:28:40.848052 kubelet[2929]: W0430 03:28:40.847985 2929 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:40.848052 kubelet[2929]: E0430 
03:28:40.848049 2929 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:40.850508 kubelet[2929]: E0430 03:28:40.850468 2929 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-191?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="1.6s" Apr 30 03:28:40.857356 containerd[2020]: time="2025-04-30T03:28:40.857311539Z" level=info msg="CreateContainer within sandbox \"16ee9fae8c2b8d498772467d16514aa56d3d3bcb0dcb218726536bd9b98f039c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4381b1cbaf02d412013dc6118d34e0437502a0962101150fb7c77d8873167593\"" Apr 30 03:28:40.857991 containerd[2020]: time="2025-04-30T03:28:40.857969912Z" level=info msg="StartContainer for \"4381b1cbaf02d412013dc6118d34e0437502a0962101150fb7c77d8873167593\"" Apr 30 03:28:40.865749 containerd[2020]: time="2025-04-30T03:28:40.865715609Z" level=info msg="CreateContainer within sandbox \"7048168b3aeb876df13e2f442c78ccba6bf5066ff33eea84b0436e2819adecb3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d687f5287dd2ead22a5c370cf2f61327d2568f4e1c6ebb33e8a056072437943d\"" Apr 30 03:28:40.866669 containerd[2020]: time="2025-04-30T03:28:40.866351402Z" level=info msg="StartContainer for \"d687f5287dd2ead22a5c370cf2f61327d2568f4e1c6ebb33e8a056072437943d\"" Apr 30 03:28:40.869021 containerd[2020]: time="2025-04-30T03:28:40.868994437Z" level=info msg="CreateContainer within sandbox \"11f3669d236c693b50e36f93fb23ec8faa6f5de13579f9a5dd2cb590020c88cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06c2a9bfcf6a701a114a8effa3fcafeb43db0da307d1274fea643bd895c4cfb8\"" Apr 30 03:28:40.869777 containerd[2020]: time="2025-04-30T03:28:40.869753265Z" level=info msg="StartContainer for \"06c2a9bfcf6a701a114a8effa3fcafeb43db0da307d1274fea643bd895c4cfb8\"" Apr 30 03:28:40.953735 kubelet[2929]: I0430 03:28:40.952813 2929 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-191" Apr 30 03:28:40.955340 kubelet[2929]: E0430 03:28:40.954849 2929 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.191:6443/api/v1/nodes\": dial tcp 172.31.23.191:6443: connect: connection refused" node="ip-172-31-23-191" Apr 30 03:28:40.967608 containerd[2020]: time="2025-04-30T03:28:40.966850279Z" level=info msg="StartContainer for \"4381b1cbaf02d412013dc6118d34e0437502a0962101150fb7c77d8873167593\" returns successfully" Apr 30 03:28:40.983496 containerd[2020]: time="2025-04-30T03:28:40.983375919Z" level=info msg="StartContainer for \"d687f5287dd2ead22a5c370cf2f61327d2568f4e1c6ebb33e8a056072437943d\" returns successfully" Apr 30 03:28:41.008450 containerd[2020]: time="2025-04-30T03:28:41.007282099Z" level=info msg="StartContainer for \"06c2a9bfcf6a701a114a8effa3fcafeb43db0da307d1274fea643bd895c4cfb8\" returns successfully" Apr 30 03:28:41.552450 kubelet[2929]: E0430 03:28:41.550816 2929 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://172.31.23.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.191:6443: connect: connection refused Apr 30 03:28:42.451494 kubelet[2929]: E0430 03:28:42.451443 2929 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-191?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="3.2s" Apr 30 03:28:42.556811 kubelet[2929]: I0430 03:28:42.556781 2929 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-191" Apr 30 03:28:43.892758 kubelet[2929]: I0430 03:28:43.892707 2929 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-191" Apr 30 03:28:44.422121 kubelet[2929]: I0430 03:28:44.421914 2929 apiserver.go:52] "Watching apiserver" Apr 30 03:28:44.447630 kubelet[2929]: I0430 03:28:44.447551 2929 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:28:44.471360 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 03:28:45.847194 systemd[1]: Reloading requested from client PID 3205 ('systemctl') (unit session-7.scope)... Apr 30 03:28:45.847214 systemd[1]: Reloading... Apr 30 03:28:45.948071 zram_generator::config[3241]: No configuration found. Apr 30 03:28:46.086776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:46.182010 systemd[1]: Reloading finished in 334 ms. Apr 30 03:28:46.213196 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:46.226779 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:28:46.227150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:46.234113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:46.454057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:46.463663 (kubelet)[3315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:28:46.543175 kubelet[3315]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:46.543175 kubelet[3315]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:28:46.543175 kubelet[3315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:28:46.543707 kubelet[3315]: I0430 03:28:46.543242 3315 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:28:46.552944 kubelet[3315]: I0430 03:28:46.552901 3315 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:28:46.552944 kubelet[3315]: I0430 03:28:46.552942 3315 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:28:46.553307 kubelet[3315]: I0430 03:28:46.553226 3315 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:28:46.555224 kubelet[3315]: I0430 03:28:46.555196 3315 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:28:46.556667 kubelet[3315]: I0430 03:28:46.556628 3315 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:28:46.563189 kubelet[3315]: I0430 03:28:46.563157 3315 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:28:46.564097 kubelet[3315]: I0430 03:28:46.563804 3315 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:28:46.564097 kubelet[3315]: I0430 03:28:46.563833 3315 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-191","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:28:46.564097 kubelet[3315]: I0430 03:28:46.563997 3315 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:28:46.564097 kubelet[3315]: I0430 03:28:46.564006 3315 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:28:46.566727 kubelet[3315]: I0430 03:28:46.566707 3315 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:46.567126 kubelet[3315]: I0430 03:28:46.566960 3315 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:28:46.567126 kubelet[3315]: I0430 03:28:46.566978 3315 kubelet.go:301] "Adding static 
pod path" path="/etc/kubernetes/manifests" Apr 30 03:28:46.567126 kubelet[3315]: I0430 03:28:46.567000 3315 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:28:46.567126 kubelet[3315]: I0430 03:28:46.567020 3315 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:28:46.571739 kubelet[3315]: I0430 03:28:46.570527 3315 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:28:46.573595 kubelet[3315]: I0430 03:28:46.573142 3315 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:28:46.573722 kubelet[3315]: I0430 03:28:46.573708 3315 server.go:1264] "Started kubelet" Apr 30 03:28:46.580139 kubelet[3315]: I0430 03:28:46.579025 3315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:28:46.590175 kubelet[3315]: I0430 03:28:46.590134 3315 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:28:46.592922 kubelet[3315]: I0430 03:28:46.592876 3315 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:28:46.602417 kubelet[3315]: E0430 03:28:46.602363 3315 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:28:46.604179 kubelet[3315]: I0430 03:28:46.602671 3315 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:28:46.604179 kubelet[3315]: I0430 03:28:46.602812 3315 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:28:46.604179 kubelet[3315]: I0430 03:28:46.603159 3315 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:28:46.606931 kubelet[3315]: I0430 03:28:46.606594 3315 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:28:46.606931 kubelet[3315]: I0430 03:28:46.606768 3315 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:28:46.613986 kubelet[3315]: I0430 03:28:46.613919 3315 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:28:46.614861 kubelet[3315]: I0430 03:28:46.614198 3315 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:28:46.619038 kubelet[3315]: I0430 03:28:46.618172 3315 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:28:46.622709 kubelet[3315]: I0430 03:28:46.622653 3315 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:28:46.626209 kubelet[3315]: I0430 03:28:46.626178 3315 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:28:46.626336 kubelet[3315]: I0430 03:28:46.626218 3315 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:28:46.626336 kubelet[3315]: I0430 03:28:46.626237 3315 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:28:46.626336 kubelet[3315]: E0430 03:28:46.626280 3315 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:28:46.677200 kubelet[3315]: I0430 03:28:46.677172 3315 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:28:46.677367 kubelet[3315]: I0430 03:28:46.677356 3315 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:28:46.677437 kubelet[3315]: I0430 03:28:46.677429 3315 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:46.677654 kubelet[3315]: I0430 03:28:46.677636 3315 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:28:46.677743 kubelet[3315]: I0430 03:28:46.677724 3315 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:28:46.677789 kubelet[3315]: I0430 03:28:46.677784 3315 policy_none.go:49] "None policy: Start" Apr 30 03:28:46.678377 kubelet[3315]: I0430 03:28:46.678358 3315 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:28:46.678441 kubelet[3315]: I0430 03:28:46.678382 3315 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:28:46.678535 kubelet[3315]: I0430 03:28:46.678521 3315 state_mem.go:75] "Updated machine memory state" Apr 30 03:28:46.680095 kubelet[3315]: I0430 03:28:46.679823 3315 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:28:46.680095 kubelet[3315]: I0430 03:28:46.679965 3315 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:28:46.680095 kubelet[3315]: I0430 03:28:46.680045 3315 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:28:46.713006 kubelet[3315]: I0430 03:28:46.711662 3315 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-191" Apr 30 03:28:46.718496 kubelet[3315]: I0430 03:28:46.718465 3315 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-191" Apr 30 03:28:46.718636 kubelet[3315]: I0430 03:28:46.718544 3315 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-191" Apr 30 03:28:46.727226 kubelet[3315]: I0430 03:28:46.726396 3315 topology_manager.go:215] "Topology Admit Handler" podUID="4687d6a66dc4ea77407d51c42e6763d6" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-191" Apr 30 03:28:46.727226 kubelet[3315]: I0430 03:28:46.726528 3315 topology_manager.go:215] "Topology Admit Handler" podUID="3cd3b43929ebc163fcf06bbc4503efe3" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:46.727226 kubelet[3315]: I0430 03:28:46.726581 3315 topology_manager.go:215] "Topology Admit Handler" podUID="31ca1f2d508ea85593b822b217696566" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-191" Apr 30 03:28:46.736140 kubelet[3315]: E0430 03:28:46.735829 3315 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-23-191\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-191" Apr 30 03:28:46.909248 kubelet[3315]: I0430 03:28:46.909207 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4687d6a66dc4ea77407d51c42e6763d6-ca-certs\") pod \"kube-apiserver-ip-172-31-23-191\" (UID: \"4687d6a66dc4ea77407d51c42e6763d6\") " pod="kube-system/kube-apiserver-ip-172-31-23-191" Apr 30 03:28:46.909483 kubelet[3315]: I0430 03:28:46.909261 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4687d6a66dc4ea77407d51c42e6763d6-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-191\" (UID: \"4687d6a66dc4ea77407d51c42e6763d6\") " pod="kube-system/kube-apiserver-ip-172-31-23-191" Apr 30 03:28:46.909483 kubelet[3315]: I0430 03:28:46.909280 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:46.909483 kubelet[3315]: I0430 03:28:46.909307 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:46.909483 kubelet[3315]: I0430 03:28:46.909325 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:46.909483 kubelet[3315]: I0430 03:28:46.909342 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4687d6a66dc4ea77407d51c42e6763d6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-191\" (UID: \"4687d6a66dc4ea77407d51c42e6763d6\") " pod="kube-system/kube-apiserver-ip-172-31-23-191" Apr 30 03:28:46.909669 kubelet[3315]: I0430 03:28:46.909368 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:46.909669 kubelet[3315]: I0430 03:28:46.909384 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cd3b43929ebc163fcf06bbc4503efe3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-191\" (UID: \"3cd3b43929ebc163fcf06bbc4503efe3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:46.909669 kubelet[3315]: I0430 03:28:46.909402 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31ca1f2d508ea85593b822b217696566-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-191\" (UID: \"31ca1f2d508ea85593b822b217696566\") " 
pod="kube-system/kube-scheduler-ip-172-31-23-191" Apr 30 03:28:47.568490 kubelet[3315]: I0430 03:28:47.568448 3315 apiserver.go:52] "Watching apiserver" Apr 30 03:28:47.606840 kubelet[3315]: I0430 03:28:47.606805 3315 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:28:47.663674 kubelet[3315]: E0430 03:28:47.661747 3315 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-191\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-191" Apr 30 03:28:47.726061 kubelet[3315]: I0430 03:28:47.725961 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-191" podStartSLOduration=2.725944254 podStartE2EDuration="2.725944254s" podCreationTimestamp="2025-04-30 03:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:47.725898535 +0000 UTC m=+1.252115286" watchObservedRunningTime="2025-04-30 03:28:47.725944254 +0000 UTC m=+1.252161005" Apr 30 03:28:47.803442 kubelet[3315]: I0430 03:28:47.802993 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-191" podStartSLOduration=1.802970625 podStartE2EDuration="1.802970625s" podCreationTimestamp="2025-04-30 03:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:47.774167584 +0000 UTC m=+1.300384335" watchObservedRunningTime="2025-04-30 03:28:47.802970625 +0000 UTC m=+1.329187380" Apr 30 03:28:47.832331 kubelet[3315]: I0430 03:28:47.832169 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-191" podStartSLOduration=1.832148764 podStartE2EDuration="1.832148764s" podCreationTimestamp="2025-04-30 03:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:47.803310443 +0000 UTC m=+1.329527194" watchObservedRunningTime="2025-04-30 03:28:47.832148764 +0000 UTC m=+1.358365511" Apr 30 03:28:52.304344 sudo[2373]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:52.342068 sshd[2369]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:52.345466 systemd[1]: sshd@6-172.31.23.191:22-147.75.109.163:37656.service: Deactivated successfully. Apr 30 03:28:52.349542 systemd-logind[1998]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:28:52.350084 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:28:52.351910 systemd-logind[1998]: Removed session 7. Apr 30 03:28:58.605289 update_engine[1999]: I20250430 03:28:58.605211 1999 update_attempter.cc:509] Updating boot flags... 
Apr 30 03:28:58.700671 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3405)
Apr 30 03:28:58.825598 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3407)
Apr 30 03:28:59.002617 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3407)
Apr 30 03:29:02.224833 kubelet[3315]: I0430 03:29:02.224692 3315 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 03:29:02.247593 containerd[2020]: time="2025-04-30T03:29:02.245042688Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 03:29:02.248091 kubelet[3315]: I0430 03:29:02.245851 3315 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 03:29:03.212807 kubelet[3315]: I0430 03:29:03.212709 3315 topology_manager.go:215] "Topology Admit Handler" podUID="e16cc761-fad5-498e-8cd6-e68102c82ee5" podNamespace="kube-system" podName="kube-proxy-gpjrf"
Apr 30 03:29:03.314014 kubelet[3315]: I0430 03:29:03.312872 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e16cc761-fad5-498e-8cd6-e68102c82ee5-kube-proxy\") pod \"kube-proxy-gpjrf\" (UID: \"e16cc761-fad5-498e-8cd6-e68102c82ee5\") " pod="kube-system/kube-proxy-gpjrf"
Apr 30 03:29:03.314014 kubelet[3315]: I0430 03:29:03.312920 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e16cc761-fad5-498e-8cd6-e68102c82ee5-lib-modules\") pod \"kube-proxy-gpjrf\" (UID: \"e16cc761-fad5-498e-8cd6-e68102c82ee5\") " pod="kube-system/kube-proxy-gpjrf"
Apr 30 03:29:03.314014 kubelet[3315]: I0430 03:29:03.312947 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmxr\" (UniqueName: \"kubernetes.io/projected/e16cc761-fad5-498e-8cd6-e68102c82ee5-kube-api-access-fxmxr\") pod \"kube-proxy-gpjrf\" (UID: \"e16cc761-fad5-498e-8cd6-e68102c82ee5\") " pod="kube-system/kube-proxy-gpjrf"
Apr 30 03:29:03.314014 kubelet[3315]: I0430 03:29:03.312985 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e16cc761-fad5-498e-8cd6-e68102c82ee5-xtables-lock\") pod \"kube-proxy-gpjrf\" (UID: \"e16cc761-fad5-498e-8cd6-e68102c82ee5\") " pod="kube-system/kube-proxy-gpjrf"
Apr 30 03:29:03.331433 kubelet[3315]: I0430 03:29:03.326712 3315 topology_manager.go:215] "Topology Admit Handler" podUID="0db021b3-e9d1-4d96-ac81-5cf5fbe25c3d" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-rcm4b"
Apr 30 03:29:03.517548 containerd[2020]: time="2025-04-30T03:29:03.517443817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gpjrf,Uid:e16cc761-fad5-498e-8cd6-e68102c82ee5,Namespace:kube-system,Attempt:0,}"
Apr 30 03:29:03.519742 kubelet[3315]: I0430 03:29:03.519704 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0db021b3-e9d1-4d96-ac81-5cf5fbe25c3d-var-lib-calico\") pod \"tigera-operator-797db67f8-rcm4b\" (UID: \"0db021b3-e9d1-4d96-ac81-5cf5fbe25c3d\") " pod="tigera-operator/tigera-operator-797db67f8-rcm4b"
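[Editor's note: once kubelet pushes PodCIDR 192.168.0.0/24 to the runtime (the kuberuntime_manager and kubelet_network entries above), pod IPs on this node should fall inside that prefix. A quick containment check with the Go standard library, illustrative only:]

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The CIDR handed to the runtime in the log above.
	podCIDR := netip.MustParsePrefix("192.168.0.0/24")
	for _, s := range []string{"192.168.0.17", "192.168.1.5"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", s, podCIDR, podCIDR.Contains(addr))
	}
}
```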
Apr 30 03:29:03.519742 kubelet[3315]: I0430 03:29:03.519755 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7z4t\" (UniqueName: \"kubernetes.io/projected/0db021b3-e9d1-4d96-ac81-5cf5fbe25c3d-kube-api-access-v7z4t\") pod \"tigera-operator-797db67f8-rcm4b\" (UID: \"0db021b3-e9d1-4d96-ac81-5cf5fbe25c3d\") " pod="tigera-operator/tigera-operator-797db67f8-rcm4b"
Apr 30 03:29:03.562213 containerd[2020]: time="2025-04-30T03:29:03.562085654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:03.562213 containerd[2020]: time="2025-04-30T03:29:03.562156785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:03.562213 containerd[2020]: time="2025-04-30T03:29:03.562178543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:03.562990 containerd[2020]: time="2025-04-30T03:29:03.562381138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:03.616310 containerd[2020]: time="2025-04-30T03:29:03.616270791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gpjrf,Uid:e16cc761-fad5-498e-8cd6-e68102c82ee5,Namespace:kube-system,Attempt:0,} returns sandbox id \"85b9fc56cd314ab5a974d00c2c6beed07744714f5d6bbcec4536c7350d17b6fa\""
Apr 30 03:29:03.620509 containerd[2020]: time="2025-04-30T03:29:03.619920561Z" level=info msg="CreateContainer within sandbox \"85b9fc56cd314ab5a974d00c2c6beed07744714f5d6bbcec4536c7350d17b6fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 03:29:03.701362 containerd[2020]: time="2025-04-30T03:29:03.701296102Z" level=info msg="CreateContainer within sandbox \"85b9fc56cd314ab5a974d00c2c6beed07744714f5d6bbcec4536c7350d17b6fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cffdd03eb3c1c3f78a402b1b7e56d25c2fc8945fc44fb06a7ccebdafc51128a3\""
Apr 30 03:29:03.703306 containerd[2020]: time="2025-04-30T03:29:03.702045289Z" level=info msg="StartContainer for \"cffdd03eb3c1c3f78a402b1b7e56d25c2fc8945fc44fb06a7ccebdafc51128a3\""
Apr 30 03:29:03.780860 containerd[2020]: time="2025-04-30T03:29:03.780145324Z" level=info msg="StartContainer for \"cffdd03eb3c1c3f78a402b1b7e56d25c2fc8945fc44fb06a7ccebdafc51128a3\" returns successfully"
Apr 30 03:29:03.939934 containerd[2020]: time="2025-04-30T03:29:03.939895562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-rcm4b,Uid:0db021b3-e9d1-4d96-ac81-5cf5fbe25c3d,Namespace:tigera-operator,Attempt:0,}"
Apr 30 03:29:03.980016 containerd[2020]: time="2025-04-30T03:29:03.979879658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:03.980173 containerd[2020]: time="2025-04-30T03:29:03.980049624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
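[Editor's note: the kube-proxy trace above is the standard CRI ordering: RunPodSandbox returns a sandbox id, CreateContainer runs against that sandbox and returns a container id, StartContainer takes the container id. A toy Go sketch of that call sequence; the interface here is a deliberately simplified stand-in, not the real k8s.io/cri-api types:]

```go
package main

import "fmt"

// criRuntime is a simplified stand-in for a CRI runtime client; the
// real API takes rich config structs rather than bare names.
type criRuntime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod reproduces the ordering seen in the containerd entries above.
func startPod(r criRuntime, pod, name string) error {
	sb, err := r.RunPodSandbox(pod)
	if err != nil {
		return err
	}
	id, err := r.CreateContainer(sb, name)
	if err != nil {
		return err
	}
	return r.StartContainer(id)
}

// fakeRuntime satisfies criRuntime for demonstration purposes.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error)        { return "sb-" + pod, nil }
func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return "ctr-" + name, nil }
func (fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

func main() {
	if err := startPod(fakeRuntime{}, "kube-proxy-gpjrf", "kube-proxy"); err != nil {
		panic(err)
	}
}
```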
Apr 30 03:29:03.980173 containerd[2020]: time="2025-04-30T03:29:03.980107120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:03.981247 containerd[2020]: time="2025-04-30T03:29:03.980488893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:04.062035 containerd[2020]: time="2025-04-30T03:29:04.061146870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-rcm4b,Uid:0db021b3-e9d1-4d96-ac81-5cf5fbe25c3d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ee8c4b4cf190ee3381a983a022efbfa80823f7d9b0dcdce27d9d561a35777cf9\""
Apr 30 03:29:04.077536 containerd[2020]: time="2025-04-30T03:29:04.077279481Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
Apr 30 03:29:04.731407 kubelet[3315]: I0430 03:29:04.731308 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gpjrf" podStartSLOduration=1.731270302 podStartE2EDuration="1.731270302s" podCreationTimestamp="2025-04-30 03:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:04.731014582 +0000 UTC m=+18.257231332" watchObservedRunningTime="2025-04-30 03:29:04.731270302 +0000 UTC m=+18.257487053"
Apr 30 03:29:05.869791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020317733.mount: Deactivated successfully.
Apr 30 03:29:07.121309 containerd[2020]: time="2025-04-30T03:29:07.121256931Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:07.122210 containerd[2020]: time="2025-04-30T03:29:07.122069848Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
Apr 30 03:29:07.123614 containerd[2020]: time="2025-04-30T03:29:07.123108298Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:07.125759 containerd[2020]: time="2025-04-30T03:29:07.125706956Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:07.139594 containerd[2020]: time="2025-04-30T03:29:07.139416079Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.062081055s"
Apr 30 03:29:07.139594 containerd[2020]: time="2025-04-30T03:29:07.139468108Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
Apr 30 03:29:07.142073 containerd[2020]: time="2025-04-30T03:29:07.142029438Z" level=info msg="CreateContainer within sandbox \"ee8c4b4cf190ee3381a983a022efbfa80823f7d9b0dcdce27d9d561a35777cf9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 30 03:29:07.161606 containerd[2020]: time="2025-04-30T03:29:07.161322570Z" level=info msg="CreateContainer within sandbox \"ee8c4b4cf190ee3381a983a022efbfa80823f7d9b0dcdce27d9d561a35777cf9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns
container id \"680d7b9163ff3f4c5019192dca5d9f0dc74c603cbbf31e5fd115dfcf6e48ba5c\"" Apr 30 03:29:07.162436 containerd[2020]: time="2025-04-30T03:29:07.162397968Z" level=info msg="StartContainer for \"680d7b9163ff3f4c5019192dca5d9f0dc74c603cbbf31e5fd115dfcf6e48ba5c\"" Apr 30 03:29:07.219465 systemd[1]: run-containerd-runc-k8s.io-680d7b9163ff3f4c5019192dca5d9f0dc74c603cbbf31e5fd115dfcf6e48ba5c-runc.LMeLMx.mount: Deactivated successfully. Apr 30 03:29:07.255576 containerd[2020]: time="2025-04-30T03:29:07.255495771Z" level=info msg="StartContainer for \"680d7b9163ff3f4c5019192dca5d9f0dc74c603cbbf31e5fd115dfcf6e48ba5c\" returns successfully" Apr 30 03:29:07.740047 kubelet[3315]: I0430 03:29:07.737544 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-rcm4b" podStartSLOduration=1.660811332 podStartE2EDuration="4.73752743s" podCreationTimestamp="2025-04-30 03:29:03 +0000 UTC" firstStartedPulling="2025-04-30 03:29:04.063954632 +0000 UTC m=+17.590171372" lastFinishedPulling="2025-04-30 03:29:07.140670726 +0000 UTC m=+20.666887470" observedRunningTime="2025-04-30 03:29:07.73749965 +0000 UTC m=+21.263716401" watchObservedRunningTime="2025-04-30 03:29:07.73752743 +0000 UTC m=+21.263744161" Apr 30 03:29:10.660351 kubelet[3315]: I0430 03:29:10.656393 3315 topology_manager.go:215] "Topology Admit Handler" podUID="ade21f3d-5259-47ed-ad9a-431742ebb77b" podNamespace="calico-system" podName="calico-typha-6b548f9f97-7ktsj" Apr 30 03:29:10.771738 kubelet[3315]: I0430 03:29:10.770874 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b2l5\" (UniqueName: \"kubernetes.io/projected/ade21f3d-5259-47ed-ad9a-431742ebb77b-kube-api-access-2b2l5\") pod \"calico-typha-6b548f9f97-7ktsj\" (UID: \"ade21f3d-5259-47ed-ad9a-431742ebb77b\") " pod="calico-system/calico-typha-6b548f9f97-7ktsj" Apr 30 03:29:10.772161 kubelet[3315]: I0430 03:29:10.772085 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ade21f3d-5259-47ed-ad9a-431742ebb77b-tigera-ca-bundle\") pod \"calico-typha-6b548f9f97-7ktsj\" (UID: \"ade21f3d-5259-47ed-ad9a-431742ebb77b\") " pod="calico-system/calico-typha-6b548f9f97-7ktsj" Apr 30 03:29:10.772305 kubelet[3315]: I0430 03:29:10.772290 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ade21f3d-5259-47ed-ad9a-431742ebb77b-typha-certs\") pod \"calico-typha-6b548f9f97-7ktsj\" (UID: \"ade21f3d-5259-47ed-ad9a-431742ebb77b\") " pod="calico-system/calico-typha-6b548f9f97-7ktsj" Apr 30 03:29:10.892389 kubelet[3315]: I0430 03:29:10.889997 3315 topology_manager.go:215] "Topology Admit Handler" podUID="9f950506-6b51-4472-a7c6-05d30c4d7f9f" podNamespace="calico-system" podName="calico-node-5pb4r" Apr 30 03:29:10.982002 containerd[2020]: time="2025-04-30T03:29:10.981895391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b548f9f97-7ktsj,Uid:ade21f3d-5259-47ed-ad9a-431742ebb77b,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:11.035294 kubelet[3315]: I0430 03:29:11.034598 3315 topology_manager.go:215] "Topology Admit Handler" podUID="46a42e1d-4f1d-46c0-be67-d687a45629b1" podNamespace="calico-system" podName="csi-node-driver-lfxjq" Apr 30 03:29:11.035294 kubelet[3315]: E0430 03:29:11.035096 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="network is 
not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfxjq" podUID="46a42e1d-4f1d-46c0-be67-d687a45629b1" Apr 30 03:29:11.053332 containerd[2020]: time="2025-04-30T03:29:11.052301775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:11.053332 containerd[2020]: time="2025-04-30T03:29:11.052372916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:11.053332 containerd[2020]: time="2025-04-30T03:29:11.052399250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:11.053332 containerd[2020]: time="2025-04-30T03:29:11.052553909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:11.075719 kubelet[3315]: I0430 03:29:11.074753 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-bin-dir\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.075719 kubelet[3315]: I0430 03:29:11.074801 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-xtables-lock\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.075719 kubelet[3315]: I0430 03:29:11.074828 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-var-run-calico\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.075719 kubelet[3315]: I0430 03:29:11.074853 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-log-dir\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.075719 kubelet[3315]: I0430 03:29:11.074876 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-flexvol-driver-host\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.076044 kubelet[3315]: I0430 03:29:11.074902 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9f950506-6b51-4472-a7c6-05d30c4d7f9f-node-certs\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.076044 kubelet[3315]: I0430 03:29:11.074925 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-lib-modules\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.076044 kubelet[3315]: I0430 03:29:11.074948 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f950506-6b51-4472-a7c6-05d30c4d7f9f-tigera-ca-bundle\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.076044 kubelet[3315]: I0430 03:29:11.074970 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-net-dir\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.076044 kubelet[3315]: I0430 03:29:11.074996 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-policysync\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.076233 kubelet[3315]: I0430 03:29:11.075025 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-var-lib-calico\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.076233 kubelet[3315]: I0430 03:29:11.075050 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb7r6\" (UniqueName: \"kubernetes.io/projected/9f950506-6b51-4472-a7c6-05d30c4d7f9f-kube-api-access-xb7r6\") pod \"calico-node-5pb4r\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " pod="calico-system/calico-node-5pb4r" Apr 30 03:29:11.176636 kubelet[3315]: I0430 03:29:11.175212 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46a42e1d-4f1d-46c0-be67-d687a45629b1-kubelet-dir\") pod \"csi-node-driver-lfxjq\" (UID: \"46a42e1d-4f1d-46c0-be67-d687a45629b1\") " pod="calico-system/csi-node-driver-lfxjq" Apr 30 03:29:11.176636 kubelet[3315]: I0430 03:29:11.175254 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46a42e1d-4f1d-46c0-be67-d687a45629b1-socket-dir\") pod \"csi-node-driver-lfxjq\" (UID: \"46a42e1d-4f1d-46c0-be67-d687a45629b1\") " pod="calico-system/csi-node-driver-lfxjq" Apr 30 03:29:11.176636 kubelet[3315]: I0430 03:29:11.175348 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46a42e1d-4f1d-46c0-be67-d687a45629b1-registration-dir\") pod \"csi-node-driver-lfxjq\" (UID: \"46a42e1d-4f1d-46c0-be67-d687a45629b1\") " pod="calico-system/csi-node-driver-lfxjq" Apr 30 03:29:11.176636 kubelet[3315]: I0430 03:29:11.175431 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdg2l\" (UniqueName: 
\"kubernetes.io/projected/46a42e1d-4f1d-46c0-be67-d687a45629b1-kube-api-access-gdg2l\") pod \"csi-node-driver-lfxjq\" (UID: \"46a42e1d-4f1d-46c0-be67-d687a45629b1\") " pod="calico-system/csi-node-driver-lfxjq" Apr 30 03:29:11.176636 kubelet[3315]: I0430 03:29:11.175467 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/46a42e1d-4f1d-46c0-be67-d687a45629b1-varrun\") pod \"csi-node-driver-lfxjq\" (UID: \"46a42e1d-4f1d-46c0-be67-d687a45629b1\") " pod="calico-system/csi-node-driver-lfxjq" Apr 30 03:29:11.182239 kubelet[3315]: E0430 03:29:11.182206 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.182432 kubelet[3315]: W0430 03:29:11.182409 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.185292 kubelet[3315]: E0430 03:29:11.185258 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.192111 kubelet[3315]: E0430 03:29:11.192085 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.192269 kubelet[3315]: W0430 03:29:11.192255 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.192351 kubelet[3315]: E0430 03:29:11.192340 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.208853 kubelet[3315]: E0430 03:29:11.208743 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.209075 kubelet[3315]: W0430 03:29:11.209052 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.209244 kubelet[3315]: E0430 03:29:11.209138 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:11.220108 containerd[2020]: time="2025-04-30T03:29:11.219983553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b548f9f97-7ktsj,Uid:ade21f3d-5259-47ed-ad9a-431742ebb77b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3\"" Apr 30 03:29:11.224304 containerd[2020]: time="2025-04-30T03:29:11.223959583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:29:11.276649 kubelet[3315]: E0430 03:29:11.276593 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.276649 kubelet[3315]: W0430 03:29:11.276638 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.276859 kubelet[3315]: E0430 03:29:11.276666 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.277123 kubelet[3315]: E0430 03:29:11.277100 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.277223 kubelet[3315]: W0430 03:29:11.277118 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.277804 kubelet[3315]: E0430 03:29:11.277637 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.278264 kubelet[3315]: E0430 03:29:11.278240 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.278474 kubelet[3315]: W0430 03:29:11.278369 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.278941 kubelet[3315]: E0430 03:29:11.278402 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.279252 kubelet[3315]: E0430 03:29:11.279158 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.279252 kubelet[3315]: W0430 03:29:11.279173 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.279252 kubelet[3315]: E0430 03:29:11.279194 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:11.279885 kubelet[3315]: E0430 03:29:11.279866 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.279885 kubelet[3315]: W0430 03:29:11.279882 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.280202 kubelet[3315]: E0430 03:29:11.279908 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.280708 kubelet[3315]: E0430 03:29:11.280685 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.280708 kubelet[3315]: W0430 03:29:11.280702 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.280846 kubelet[3315]: E0430 03:29:11.280775 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.286585 kubelet[3315]: E0430 03:29:11.281742 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.286585 kubelet[3315]: W0430 03:29:11.281757 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.286585 kubelet[3315]: E0430 03:29:11.281872 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:11.286585 kubelet[3315]: E0430 03:29:11.283283 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.286585 kubelet[3315]: W0430 03:29:11.283296 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.286585 kubelet[3315]: E0430 03:29:11.283603 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.286585 kubelet[3315]: W0430 03:29:11.283616 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.286585 kubelet[3315]: E0430 03:29:11.283894 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.286585 kubelet[3315]: W0430 03:29:11.283905 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.286585 kubelet[3315]: E0430 03:29:11.283920 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.286585 kubelet[3315]: E0430 03:29:11.284109 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.287133 kubelet[3315]: W0430 03:29:11.284118 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.287133 kubelet[3315]: E0430 03:29:11.284129 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.287133 kubelet[3315]: E0430 03:29:11.284291 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.287133 kubelet[3315]: W0430 03:29:11.284299 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.287133 kubelet[3315]: E0430 03:29:11.284312 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.289710 kubelet[3315]: E0430 03:29:11.288623 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.289710 kubelet[3315]: E0430 03:29:11.288659 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:11.289710 kubelet[3315]: E0430 03:29:11.289381 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.289710 kubelet[3315]: W0430 03:29:11.289397 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.289710 kubelet[3315]: E0430 03:29:11.289418 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.290932 kubelet[3315]: E0430 03:29:11.290529 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.290932 kubelet[3315]: W0430 03:29:11.290543 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.290932 kubelet[3315]: E0430 03:29:11.290589 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.291153 kubelet[3315]: E0430 03:29:11.291088 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.291153 kubelet[3315]: W0430 03:29:11.291104 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.291548 kubelet[3315]: E0430 03:29:11.291164 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:11.292013 kubelet[3315]: E0430 03:29:11.291994 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.292013 kubelet[3315]: W0430 03:29:11.292013 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.293165 kubelet[3315]: E0430 03:29:11.292353 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.293165 kubelet[3315]: W0430 03:29:11.292378 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.293165 kubelet[3315]: E0430 03:29:11.292661 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.293165 kubelet[3315]: W0430 03:29:11.292673 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.293165 kubelet[3315]: E0430 03:29:11.292690 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.293165 kubelet[3315]: E0430 03:29:11.292725 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.293165 kubelet[3315]: E0430 03:29:11.292830 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.293165 kubelet[3315]: E0430 03:29:11.293170 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.295051 kubelet[3315]: W0430 03:29:11.293182 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.295051 kubelet[3315]: E0430 03:29:11.293225 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.295051 kubelet[3315]: E0430 03:29:11.293752 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.295051 kubelet[3315]: W0430 03:29:11.293765 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.295051 kubelet[3315]: E0430 03:29:11.293971 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:11.295051 kubelet[3315]: E0430 03:29:11.294172 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.295051 kubelet[3315]: W0430 03:29:11.294182 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.295051 kubelet[3315]: E0430 03:29:11.294526 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.295606 kubelet[3315]: E0430 03:29:11.295319 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.295606 kubelet[3315]: W0430 03:29:11.295332 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.295606 kubelet[3315]: E0430 03:29:11.295372 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.296010 kubelet[3315]: E0430 03:29:11.295991 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.296010 kubelet[3315]: W0430 03:29:11.296007 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.296161 kubelet[3315]: E0430 03:29:11.296137 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.296505 kubelet[3315]: E0430 03:29:11.296338 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.296505 kubelet[3315]: W0430 03:29:11.296352 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.296505 kubelet[3315]: E0430 03:29:11.296374 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.296835 kubelet[3315]: E0430 03:29:11.296820 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.296918 kubelet[3315]: W0430 03:29:11.296907 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.297012 kubelet[3315]: E0430 03:29:11.297000 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:11.299681 kubelet[3315]: E0430 03:29:11.299658 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:11.299681 kubelet[3315]: W0430 03:29:11.299678 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:11.299859 kubelet[3315]: E0430 03:29:11.299698 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:11.505779 containerd[2020]: time="2025-04-30T03:29:11.505584972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5pb4r,Uid:9f950506-6b51-4472-a7c6-05d30c4d7f9f,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:11.568265 containerd[2020]: time="2025-04-30T03:29:11.568065402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:11.570523 containerd[2020]: time="2025-04-30T03:29:11.570482667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:11.571325 containerd[2020]: time="2025-04-30T03:29:11.571290665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:11.575411 containerd[2020]: time="2025-04-30T03:29:11.575283992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:11.636703 containerd[2020]: time="2025-04-30T03:29:11.636529653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5pb4r,Uid:9f950506-6b51-4472-a7c6-05d30c4d7f9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\"" Apr 30 03:29:12.630280 kubelet[3315]: E0430 03:29:12.628325 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfxjq" podUID="46a42e1d-4f1d-46c0-be67-d687a45629b1" Apr 30 03:29:13.393159 containerd[2020]: time="2025-04-30T03:29:13.393066516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:13.393781 containerd[2020]: time="2025-04-30T03:29:13.393742724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:29:13.394309 containerd[2020]: time="2025-04-30T03:29:13.394253118Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:13.400690 containerd[2020]: time="2025-04-30T03:29:13.400134813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:13.401341 containerd[2020]: time="2025-04-30T03:29:13.401291118Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.177286138s" Apr 30 03:29:13.401445 containerd[2020]: time="2025-04-30T03:29:13.401349479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:29:13.403006 containerd[2020]: time="2025-04-30T03:29:13.402976267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:29:13.425178 containerd[2020]: time="2025-04-30T03:29:13.425135940Z" level=info msg="CreateContainer within sandbox \"8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:29:13.443689 containerd[2020]: time="2025-04-30T03:29:13.443487656Z" level=info msg="CreateContainer within sandbox \"8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\"" Apr 30 03:29:13.444624 containerd[2020]: time="2025-04-30T03:29:13.444259439Z" level=info msg="StartContainer for \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\"" Apr 30 03:29:13.547405 containerd[2020]: time="2025-04-30T03:29:13.547352254Z" level=info msg="StartContainer for \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\" returns successfully" Apr 30 03:29:13.807470 kubelet[3315]: E0430 03:29:13.807049 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.807470 kubelet[3315]: W0430 03:29:13.807084 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.807470 kubelet[3315]: E0430 03:29:13.807110 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.808741 kubelet[3315]: E0430 03:29:13.808704 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.808741 kubelet[3315]: W0430 03:29:13.808725 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.808974 kubelet[3315]: E0430 03:29:13.808752 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:13.809191 kubelet[3315]: E0430 03:29:13.809101 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.809540 kubelet[3315]: W0430 03:29:13.809508 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.809641 kubelet[3315]: E0430 03:29:13.809551 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.809890 kubelet[3315]: E0430 03:29:13.809862 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.809890 kubelet[3315]: W0430 03:29:13.809880 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.810015 kubelet[3315]: E0430 03:29:13.809896 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.810275 kubelet[3315]: E0430 03:29:13.810254 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.810275 kubelet[3315]: W0430 03:29:13.810272 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.810416 kubelet[3315]: E0430 03:29:13.810287 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.810516 kubelet[3315]: E0430 03:29:13.810499 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.810516 kubelet[3315]: W0430 03:29:13.810514 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.810754 kubelet[3315]: E0430 03:29:13.810529 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.810809 kubelet[3315]: E0430 03:29:13.810797 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.810849 kubelet[3315]: W0430 03:29:13.810808 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.810849 kubelet[3315]: E0430 03:29:13.810824 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:13.811085 kubelet[3315]: E0430 03:29:13.811066 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.811085 kubelet[3315]: W0430 03:29:13.811080 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.811197 kubelet[3315]: E0430 03:29:13.811095 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.811387 kubelet[3315]: E0430 03:29:13.811369 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.811387 kubelet[3315]: W0430 03:29:13.811384 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.811514 kubelet[3315]: E0430 03:29:13.811398 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.811656 kubelet[3315]: E0430 03:29:13.811640 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.811656 kubelet[3315]: W0430 03:29:13.811653 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.811789 kubelet[3315]: E0430 03:29:13.811670 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.811897 kubelet[3315]: E0430 03:29:13.811881 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.811897 kubelet[3315]: W0430 03:29:13.811894 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.812038 kubelet[3315]: E0430 03:29:13.811907 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.812155 kubelet[3315]: E0430 03:29:13.812142 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.812207 kubelet[3315]: W0430 03:29:13.812156 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.812207 kubelet[3315]: E0430 03:29:13.812169 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:13.812408 kubelet[3315]: E0430 03:29:13.812380 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.812408 kubelet[3315]: W0430 03:29:13.812392 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.812408 kubelet[3315]: E0430 03:29:13.812406 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.812662 kubelet[3315]: E0430 03:29:13.812632 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.812662 kubelet[3315]: W0430 03:29:13.812642 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.812662 kubelet[3315]: E0430 03:29:13.812655 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.812886 kubelet[3315]: E0430 03:29:13.812876 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.812934 kubelet[3315]: W0430 03:29:13.812888 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.812934 kubelet[3315]: E0430 03:29:13.812901 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.813588 kubelet[3315]: E0430 03:29:13.813283 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.813588 kubelet[3315]: W0430 03:29:13.813292 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.813588 kubelet[3315]: E0430 03:29:13.813302 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.813811 kubelet[3315]: E0430 03:29:13.813595 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.813811 kubelet[3315]: W0430 03:29:13.813606 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.813811 kubelet[3315]: E0430 03:29:13.813654 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:13.813959 kubelet[3315]: E0430 03:29:13.813898 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.813959 kubelet[3315]: W0430 03:29:13.813908 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.813959 kubelet[3315]: E0430 03:29:13.813927 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.814241 kubelet[3315]: E0430 03:29:13.814225 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.814308 kubelet[3315]: W0430 03:29:13.814246 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.814308 kubelet[3315]: E0430 03:29:13.814266 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.814532 kubelet[3315]: E0430 03:29:13.814515 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.814532 kubelet[3315]: W0430 03:29:13.814529 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.814959 kubelet[3315]: E0430 03:29:13.814549 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.814959 kubelet[3315]: E0430 03:29:13.814866 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.814959 kubelet[3315]: W0430 03:29:13.814878 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.814959 kubelet[3315]: E0430 03:29:13.814903 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.815259 kubelet[3315]: E0430 03:29:13.815224 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.815259 kubelet[3315]: W0430 03:29:13.815238 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.815518 kubelet[3315]: E0430 03:29:13.815354 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:13.815518 kubelet[3315]: E0430 03:29:13.815437 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.815518 kubelet[3315]: W0430 03:29:13.815447 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.815518 kubelet[3315]: E0430 03:29:13.815477 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.815813 kubelet[3315]: E0430 03:29:13.815716 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.815813 kubelet[3315]: W0430 03:29:13.815727 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.815813 kubelet[3315]: E0430 03:29:13.815745 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.816188 kubelet[3315]: E0430 03:29:13.816171 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.816188 kubelet[3315]: W0430 03:29:13.816185 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.816298 kubelet[3315]: E0430 03:29:13.816212 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.816492 kubelet[3315]: E0430 03:29:13.816476 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.816492 kubelet[3315]: W0430 03:29:13.816490 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.816703 kubelet[3315]: E0430 03:29:13.816507 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.816761 kubelet[3315]: E0430 03:29:13.816720 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.816761 kubelet[3315]: W0430 03:29:13.816730 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.816761 kubelet[3315]: E0430 03:29:13.816743 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:13.817007 kubelet[3315]: E0430 03:29:13.816983 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.817007 kubelet[3315]: W0430 03:29:13.816994 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.817300 kubelet[3315]: E0430 03:29:13.817166 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.817300 kubelet[3315]: E0430 03:29:13.817186 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.817300 kubelet[3315]: W0430 03:29:13.817194 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.817300 kubelet[3315]: E0430 03:29:13.817206 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.817772 kubelet[3315]: E0430 03:29:13.817755 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.817772 kubelet[3315]: W0430 03:29:13.817769 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.817874 kubelet[3315]: E0430 03:29:13.817799 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.818060 kubelet[3315]: E0430 03:29:13.818043 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.818060 kubelet[3315]: W0430 03:29:13.818056 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.818166 kubelet[3315]: E0430 03:29:13.818076 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:13.818475 kubelet[3315]: E0430 03:29:13.818458 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.818475 kubelet[3315]: W0430 03:29:13.818471 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.818620 kubelet[3315]: E0430 03:29:13.818489 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:13.818925 kubelet[3315]: E0430 03:29:13.818909 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:13.818925 kubelet[3315]: W0430 03:29:13.818922 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:13.819012 kubelet[3315]: E0430 03:29:13.818936 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.629932 kubelet[3315]: E0430 03:29:14.628478 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfxjq" podUID="46a42e1d-4f1d-46c0-be67-d687a45629b1" Apr 30 03:29:14.749196 kubelet[3315]: I0430 03:29:14.748473 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:14.773061 containerd[2020]: time="2025-04-30T03:29:14.772911277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:14.775101 containerd[2020]: time="2025-04-30T03:29:14.775042468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:29:14.777848 containerd[2020]: time="2025-04-30T03:29:14.777784478Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:14.782524 containerd[2020]: time="2025-04-30T03:29:14.782379644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:14.783921 containerd[2020]: time="2025-04-30T03:29:14.783445469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.379263235s" Apr 30 03:29:14.783921 containerd[2020]: time="2025-04-30T03:29:14.783474791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:29:14.787397 containerd[2020]: time="2025-04-30T03:29:14.787357938Z" level=info msg="CreateContainer within sandbox \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:29:14.813249 containerd[2020]: time="2025-04-30T03:29:14.813197572Z" level=info msg="CreateContainer within sandbox \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id 
\"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\"" Apr 30 03:29:14.815292 containerd[2020]: time="2025-04-30T03:29:14.813940391Z" level=info msg="StartContainer for \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\"" Apr 30 03:29:14.821264 kubelet[3315]: E0430 03:29:14.821229 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.821264 kubelet[3315]: W0430 03:29:14.821256 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.821902 kubelet[3315]: E0430 03:29:14.821282 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.821902 kubelet[3315]: E0430 03:29:14.821621 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.821902 kubelet[3315]: W0430 03:29:14.821634 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.821902 kubelet[3315]: E0430 03:29:14.821664 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.822090 kubelet[3315]: E0430 03:29:14.821928 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.822090 kubelet[3315]: W0430 03:29:14.821940 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.822090 kubelet[3315]: E0430 03:29:14.821986 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.822402 kubelet[3315]: E0430 03:29:14.822247 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.822402 kubelet[3315]: W0430 03:29:14.822259 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.822402 kubelet[3315]: E0430 03:29:14.822272 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:14.823191 kubelet[3315]: E0430 03:29:14.822599 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.823191 kubelet[3315]: W0430 03:29:14.822611 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.823191 kubelet[3315]: E0430 03:29:14.822632 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.823191 kubelet[3315]: E0430 03:29:14.823001 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.823191 kubelet[3315]: W0430 03:29:14.823045 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.823191 kubelet[3315]: E0430 03:29:14.823060 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.825013 kubelet[3315]: E0430 03:29:14.824720 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.825013 kubelet[3315]: W0430 03:29:14.824735 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.825013 kubelet[3315]: E0430 03:29:14.824779 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.825195 kubelet[3315]: E0430 03:29:14.825148 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.825195 kubelet[3315]: W0430 03:29:14.825159 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.825195 kubelet[3315]: E0430 03:29:14.825174 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.825677 kubelet[3315]: E0430 03:29:14.825409 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.825677 kubelet[3315]: W0430 03:29:14.825419 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.825677 kubelet[3315]: E0430 03:29:14.825449 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:14.825822 kubelet[3315]: E0430 03:29:14.825754 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.825822 kubelet[3315]: W0430 03:29:14.825779 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.825822 kubelet[3315]: E0430 03:29:14.825793 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.826450 kubelet[3315]: E0430 03:29:14.826043 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.826450 kubelet[3315]: W0430 03:29:14.826057 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.826450 kubelet[3315]: E0430 03:29:14.826070 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.827774 kubelet[3315]: E0430 03:29:14.827274 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.827774 kubelet[3315]: W0430 03:29:14.827305 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.827774 kubelet[3315]: E0430 03:29:14.827320 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.827774 kubelet[3315]: E0430 03:29:14.827696 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.827774 kubelet[3315]: W0430 03:29:14.827714 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.827774 kubelet[3315]: E0430 03:29:14.827729 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.828608 kubelet[3315]: E0430 03:29:14.828020 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.828608 kubelet[3315]: W0430 03:29:14.828030 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.828608 kubelet[3315]: E0430 03:29:14.828045 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:14.828608 kubelet[3315]: E0430 03:29:14.828315 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:14.828608 kubelet[3315]: W0430 03:29:14.828326 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:14.828608 kubelet[3315]: E0430 03:29:14.828340 3315 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:14.899141 containerd[2020]: time="2025-04-30T03:29:14.899023711Z" level=info msg="StartContainer for \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\" returns successfully" Apr 30 03:29:14.950434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b-rootfs.mount: Deactivated successfully. Apr 30 03:29:14.973931 containerd[2020]: time="2025-04-30T03:29:14.958054441Z" level=info msg="shim disconnected" id=aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b namespace=k8s.io Apr 30 03:29:14.974183 containerd[2020]: time="2025-04-30T03:29:14.973938315Z" level=warning msg="cleaning up after shim disconnected" id=aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b namespace=k8s.io Apr 30 03:29:14.974183 containerd[2020]: time="2025-04-30T03:29:14.973960173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:15.764010 containerd[2020]: time="2025-04-30T03:29:15.763964985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:29:15.785997 kubelet[3315]: I0430 03:29:15.785607 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b548f9f97-7ktsj" podStartSLOduration=3.606037625 podStartE2EDuration="5.785555281s" podCreationTimestamp="2025-04-30 03:29:10 +0000 UTC" firstStartedPulling="2025-04-30 03:29:11.223142628 +0000 UTC m=+24.749359370" lastFinishedPulling="2025-04-30 03:29:13.402660257 +0000 UTC m=+26.928877026" observedRunningTime="2025-04-30 03:29:13.75881781 +0000 UTC m=+27.285034560" watchObservedRunningTime="2025-04-30 03:29:15.785555281 +0000 UTC m=+29.311772032" Apr 30 03:29:16.627919 kubelet[3315]: E0430 03:29:16.627560 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfxjq" podUID="46a42e1d-4f1d-46c0-be67-d687a45629b1" Apr 30 03:29:18.627600 kubelet[3315]: E0430 03:29:18.627246 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfxjq" podUID="46a42e1d-4f1d-46c0-be67-d687a45629b1" Apr 30 03:29:19.413111 containerd[2020]: time="2025-04-30T03:29:19.412856281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:19.414796 containerd[2020]: time="2025-04-30T03:29:19.414737247Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:29:19.416990 containerd[2020]: time="2025-04-30T03:29:19.416958092Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:19.420535 containerd[2020]: time="2025-04-30T03:29:19.420457880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:19.422131 containerd[2020]: time="2025-04-30T03:29:19.421452403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.65743363s" Apr 30 03:29:19.422131 containerd[2020]: time="2025-04-30T03:29:19.421499328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:29:19.425349 containerd[2020]: time="2025-04-30T03:29:19.425296854Z" level=info msg="CreateContainer within sandbox \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:29:19.464225 containerd[2020]: time="2025-04-30T03:29:19.464045930Z" level=info msg="CreateContainer within sandbox \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\"" Apr 30 03:29:19.465268 containerd[2020]: time="2025-04-30T03:29:19.465239657Z" level=info msg="StartContainer for \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\"" Apr 30 03:29:19.604071 containerd[2020]: time="2025-04-30T03:29:19.603674427Z" level=info msg="StartContainer for \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\" returns successfully" Apr 30 03:29:20.629599 kubelet[3315]: E0430 03:29:20.628000 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfxjq" podUID="46a42e1d-4f1d-46c0-be67-d687a45629b1" Apr 30 03:29:20.651838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc-rootfs.mount: Deactivated successfully. 
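The pod_startup_latency_tracker entry at 03:29:15.785997 above is internally consistent: podStartSLOduration is the end-to-end startup duration minus the image-pull window, and the monotonic m=+ clock offsets recorded in the entry reproduce the logged value exactly. A quick check, as a sketch with the constants copied from that entry:

```go
// Recompute podStartSLOduration for calico-typha-6b548f9f97-7ktsj from the
// monotonic (m=+...) offsets in the pod_startup_latency_tracker entry above.
package main

import "fmt"

func main() {
	e2e := 5.785555281                  // podStartE2EDuration: watchObservedRunningTime - podCreationTimestamp
	pull := 26.928877026 - 24.749359370 // lastFinishedPulling - firstStartedPulling = 2.179517656s
	fmt.Printf("podStartSLOduration = %.9fs\n", e2e-pull) // prints 3.606037625s, matching the log
}
```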
Apr 30 03:29:20.665808 containerd[2020]: time="2025-04-30T03:29:20.665700517Z" level=info msg="shim disconnected" id=c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc namespace=k8s.io Apr 30 03:29:20.665808 containerd[2020]: time="2025-04-30T03:29:20.665802534Z" level=warning msg="cleaning up after shim disconnected" id=c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc namespace=k8s.io Apr 30 03:29:20.665808 containerd[2020]: time="2025-04-30T03:29:20.665815183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:20.679262 kubelet[3315]: I0430 03:29:20.679209 3315 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:29:20.716697 kubelet[3315]: I0430 03:29:20.715746 3315 topology_manager.go:215] "Topology Admit Handler" podUID="3d7c4146-df2d-4753-83ec-174f6ff20d4b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k4mpk" Apr 30 03:29:20.723157 kubelet[3315]: I0430 03:29:20.723118 3315 topology_manager.go:215] "Topology Admit Handler" podUID="83a6b31a-fa39-475d-820a-3c65d1ea9b44" podNamespace="calico-system" podName="calico-kube-controllers-5b8dc9df9d-46jp7" Apr 30 03:29:20.724182 kubelet[3315]: I0430 03:29:20.723681 3315 topology_manager.go:215] "Topology Admit Handler" podUID="c88b3859-8c19-40ee-b3d5-67ca01136bf7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qrm6j" Apr 30 03:29:20.726526 kubelet[3315]: I0430 03:29:20.726370 3315 topology_manager.go:215] "Topology Admit Handler" podUID="9a75c049-74d5-4f65-bcf2-58f5a64e3866" podNamespace="calico-apiserver" podName="calico-apiserver-6568d4bb6-l2z69" Apr 30 03:29:20.739015 kubelet[3315]: I0430 03:29:20.738947 3315 topology_manager.go:215] "Topology Admit Handler" podUID="d42df863-30d9-489a-a202-be20feb2d875" podNamespace="calico-apiserver" podName="calico-apiserver-6568d4bb6-sbcbv" Apr 30 03:29:20.740368 kubelet[3315]: I0430 03:29:20.740029 3315 topology_manager.go:215] "Topology Admit Handler" podUID="6170a3e5-e4fb-4596-abdd-016a02fa9e9d" podNamespace="calico-apiserver" podName="calico-apiserver-79d7797bfd-7hhqk" Apr 30 03:29:20.745223 kubelet[3315]: W0430 03:29:20.744041 3315 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-23-191" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-23-191' and this object Apr 30 03:29:20.746872 kubelet[3315]: E0430 03:29:20.745511 3315 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-23-191" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-23-191' and this object Apr 30 03:29:20.778692 containerd[2020]: time="2025-04-30T03:29:20.778322766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:29:20.867450 kubelet[3315]: I0430 03:29:20.867400 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a6b31a-fa39-475d-820a-3c65d1ea9b44-tigera-ca-bundle\") pod \"calico-kube-controllers-5b8dc9df9d-46jp7\" (UID: \"83a6b31a-fa39-475d-820a-3c65d1ea9b44\") " pod="calico-system/calico-kube-controllers-5b8dc9df9d-46jp7" Apr 30 03:29:20.867682 kubelet[3315]: I0430 
03:29:20.867651 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a75c049-74d5-4f65-bcf2-58f5a64e3866-calico-apiserver-certs\") pod \"calico-apiserver-6568d4bb6-l2z69\" (UID: \"9a75c049-74d5-4f65-bcf2-58f5a64e3866\") " pod="calico-apiserver/calico-apiserver-6568d4bb6-l2z69" Apr 30 03:29:20.867727 kubelet[3315]: I0430 03:29:20.867706 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c88b3859-8c19-40ee-b3d5-67ca01136bf7-config-volume\") pod \"coredns-7db6d8ff4d-qrm6j\" (UID: \"c88b3859-8c19-40ee-b3d5-67ca01136bf7\") " pod="kube-system/coredns-7db6d8ff4d-qrm6j" Apr 30 03:29:20.868829 kubelet[3315]: I0430 03:29:20.867736 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6170a3e5-e4fb-4596-abdd-016a02fa9e9d-calico-apiserver-certs\") pod \"calico-apiserver-79d7797bfd-7hhqk\" (UID: \"6170a3e5-e4fb-4596-abdd-016a02fa9e9d\") " pod="calico-apiserver/calico-apiserver-79d7797bfd-7hhqk" Apr 30 03:29:20.868829 kubelet[3315]: I0430 03:29:20.867755 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6k2p\" (UniqueName: \"kubernetes.io/projected/83a6b31a-fa39-475d-820a-3c65d1ea9b44-kube-api-access-k6k2p\") pod \"calico-kube-controllers-5b8dc9df9d-46jp7\" (UID: \"83a6b31a-fa39-475d-820a-3c65d1ea9b44\") " pod="calico-system/calico-kube-controllers-5b8dc9df9d-46jp7" Apr 30 03:29:20.868829 kubelet[3315]: I0430 03:29:20.867784 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25c4\" (UniqueName: \"kubernetes.io/projected/6170a3e5-e4fb-4596-abdd-016a02fa9e9d-kube-api-access-x25c4\") pod \"calico-apiserver-79d7797bfd-7hhqk\" (UID: \"6170a3e5-e4fb-4596-abdd-016a02fa9e9d\") " pod="calico-apiserver/calico-apiserver-79d7797bfd-7hhqk" Apr 30 03:29:20.868829 kubelet[3315]: I0430 03:29:20.867836 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d7c4146-df2d-4753-83ec-174f6ff20d4b-config-volume\") pod \"coredns-7db6d8ff4d-k4mpk\" (UID: \"3d7c4146-df2d-4753-83ec-174f6ff20d4b\") " pod="kube-system/coredns-7db6d8ff4d-k4mpk" Apr 30 03:29:20.868829 kubelet[3315]: I0430 03:29:20.867854 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-667x5\" (UniqueName: \"kubernetes.io/projected/d42df863-30d9-489a-a202-be20feb2d875-kube-api-access-667x5\") pod \"calico-apiserver-6568d4bb6-sbcbv\" (UID: \"d42df863-30d9-489a-a202-be20feb2d875\") " pod="calico-apiserver/calico-apiserver-6568d4bb6-sbcbv" Apr 30 03:29:20.868994 kubelet[3315]: I0430 03:29:20.867871 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnt9p\" (UniqueName: \"kubernetes.io/projected/c88b3859-8c19-40ee-b3d5-67ca01136bf7-kube-api-access-nnt9p\") pod \"coredns-7db6d8ff4d-qrm6j\" (UID: \"c88b3859-8c19-40ee-b3d5-67ca01136bf7\") " pod="kube-system/coredns-7db6d8ff4d-qrm6j" Apr 30 03:29:20.868994 kubelet[3315]: I0430 03:29:20.867888 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/d42df863-30d9-489a-a202-be20feb2d875-calico-apiserver-certs\") pod \"calico-apiserver-6568d4bb6-sbcbv\" (UID: \"d42df863-30d9-489a-a202-be20feb2d875\") " pod="calico-apiserver/calico-apiserver-6568d4bb6-sbcbv" Apr 30 03:29:20.868994 kubelet[3315]: I0430 03:29:20.867904 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz85j\" (UniqueName: \"kubernetes.io/projected/9a75c049-74d5-4f65-bcf2-58f5a64e3866-kube-api-access-bz85j\") pod \"calico-apiserver-6568d4bb6-l2z69\" (UID: \"9a75c049-74d5-4f65-bcf2-58f5a64e3866\") " pod="calico-apiserver/calico-apiserver-6568d4bb6-l2z69" Apr 30 03:29:20.868994 kubelet[3315]: I0430 03:29:20.867922 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mkjh\" (UniqueName: \"kubernetes.io/projected/3d7c4146-df2d-4753-83ec-174f6ff20d4b-kube-api-access-5mkjh\") pod \"coredns-7db6d8ff4d-k4mpk\" (UID: \"3d7c4146-df2d-4753-83ec-174f6ff20d4b\") " pod="kube-system/coredns-7db6d8ff4d-k4mpk" Apr 30 03:29:21.031467 containerd[2020]: time="2025-04-30T03:29:21.031399147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrm6j,Uid:c88b3859-8c19-40ee-b3d5-67ca01136bf7,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:21.041045 containerd[2020]: time="2025-04-30T03:29:21.041004836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k4mpk,Uid:3d7c4146-df2d-4753-83ec-174f6ff20d4b,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:21.052524 containerd[2020]: time="2025-04-30T03:29:21.052483370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b8dc9df9d-46jp7,Uid:83a6b31a-fa39-475d-820a-3c65d1ea9b44,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:21.421200 containerd[2020]: time="2025-04-30T03:29:21.420942249Z" level=error msg="Failed to destroy network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.421892 containerd[2020]: time="2025-04-30T03:29:21.421777117Z" level=error msg="Failed to destroy network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.424913 containerd[2020]: time="2025-04-30T03:29:21.424652280Z" level=error msg="encountered an error cleaning up failed sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.425050 containerd[2020]: time="2025-04-30T03:29:21.425020415Z" level=error msg="encountered an error cleaning up failed sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.434425 containerd[2020]: 
time="2025-04-30T03:29:21.432495707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k4mpk,Uid:3d7c4146-df2d-4753-83ec-174f6ff20d4b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.444346 containerd[2020]: time="2025-04-30T03:29:21.443260331Z" level=error msg="Failed to destroy network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.444346 containerd[2020]: time="2025-04-30T03:29:21.444103169Z" level=error msg="encountered an error cleaning up failed sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.444346 containerd[2020]: time="2025-04-30T03:29:21.444151024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b8dc9df9d-46jp7,Uid:83a6b31a-fa39-475d-820a-3c65d1ea9b44,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.444595 kubelet[3315]: E0430 03:29:21.444489 3315 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.444595 kubelet[3315]: E0430 03:29:21.444587 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b8dc9df9d-46jp7" Apr 30 03:29:21.444710 kubelet[3315]: E0430 03:29:21.444609 3315 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b8dc9df9d-46jp7" Apr 30 03:29:21.444710 kubelet[3315]: E0430 03:29:21.444664 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-5b8dc9df9d-46jp7_calico-system(83a6b31a-fa39-475d-820a-3c65d1ea9b44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b8dc9df9d-46jp7_calico-system(83a6b31a-fa39-475d-820a-3c65d1ea9b44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b8dc9df9d-46jp7" podUID="83a6b31a-fa39-475d-820a-3c65d1ea9b44" Apr 30 03:29:21.445019 kubelet[3315]: E0430 03:29:21.444861 3315 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.445019 kubelet[3315]: E0430 03:29:21.444921 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k4mpk" Apr 30 03:29:21.445019 kubelet[3315]: E0430 03:29:21.444940 3315 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k4mpk" Apr 30 03:29:21.445131 kubelet[3315]: E0430 03:29:21.444982 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-k4mpk_kube-system(3d7c4146-df2d-4753-83ec-174f6ff20d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-k4mpk_kube-system(3d7c4146-df2d-4753-83ec-174f6ff20d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k4mpk" podUID="3d7c4146-df2d-4753-83ec-174f6ff20d4b" Apr 30 03:29:21.445269 containerd[2020]: time="2025-04-30T03:29:21.445239744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrm6j,Uid:c88b3859-8c19-40ee-b3d5-67ca01136bf7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.445541 kubelet[3315]: E0430 03:29:21.445432 3315 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:21.445541 kubelet[3315]: E0430 03:29:21.445464 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qrm6j" Apr 30 03:29:21.445541 kubelet[3315]: E0430 03:29:21.445479 3315 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qrm6j" Apr 30 03:29:21.445670 kubelet[3315]: E0430 03:29:21.445505 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qrm6j_kube-system(c88b3859-8c19-40ee-b3d5-67ca01136bf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qrm6j_kube-system(c88b3859-8c19-40ee-b3d5-67ca01136bf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qrm6j" podUID="c88b3859-8c19-40ee-b3d5-67ca01136bf7" Apr 30 03:29:21.643927 systemd[1]: Started sshd@7-172.31.23.191:22-147.75.109.163:47512.service - OpenSSH per-connection server daemon (147.75.109.163:47512). 
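Every RunPodSandbox and StopPodSandbox failure in this stretch bottoms out in the same underlying condition: Calico's CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and that container has not started yet (the ghcr.io/flatcar/calico/node image is still being pulled at this point in the log). Until the file exists, every sandbox network setup and teardown fails and the coredns and calico-apiserver pods stay pending. A sketch of that precondition, assuming only that the file holds the Calico node name:

```go
// Sketch of the readiness precondition behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors:
// calico/node writes this file at startup; the CNI plugin refuses to
// set up or tear down pod networking until it exists.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		// This is the state the log is in: the file does not exist yet.
		fmt.Fprintln(os.Stderr, "calico not ready:", err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", strings.TrimSpace(string(data)))
}
```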
Apr 30 03:29:21.652401 containerd[2020]: time="2025-04-30T03:29:21.652363942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6568d4bb6-l2z69,Uid:9a75c049-74d5-4f65-bcf2-58f5a64e3866,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:21.664619 containerd[2020]: time="2025-04-30T03:29:21.664574962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6568d4bb6-sbcbv,Uid:d42df863-30d9-489a-a202-be20feb2d875,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:21.667124 containerd[2020]: time="2025-04-30T03:29:21.667019735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7797bfd-7hhqk,Uid:6170a3e5-e4fb-4596-abdd-016a02fa9e9d,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:21.784628 kubelet[3315]: I0430 03:29:21.784313 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:21.791058 kubelet[3315]: I0430 03:29:21.791020 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:21.798986 containerd[2020]: time="2025-04-30T03:29:21.798943100Z" level=info msg="StopPodSandbox for \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\"" Apr 30 03:29:21.801332 containerd[2020]: time="2025-04-30T03:29:21.801148953Z" level=info msg="StopPodSandbox for \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\"" Apr 30 03:29:21.804592 containerd[2020]: time="2025-04-30T03:29:21.804207182Z" level=info msg="Ensure that sandbox c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987 in task-service has been cleanup successfully" Apr 30 03:29:21.804712 containerd[2020]: time="2025-04-30T03:29:21.804595688Z" level=info msg="Ensure that sandbox 4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a in task-service has been cleanup successfully" Apr 30 03:29:21.807351 kubelet[3315]: I0430 03:29:21.807327 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:21.811592 containerd[2020]: time="2025-04-30T03:29:21.811104947Z" level=info msg="StopPodSandbox for \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\"" Apr 30 03:29:21.811592 containerd[2020]: time="2025-04-30T03:29:21.811314070Z" level=info msg="Ensure that sandbox 62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2 in task-service has been cleanup successfully" Apr 30 03:29:21.942464 sshd[4400]: Accepted publickey for core from 147.75.109.163 port 47512 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:21.947315 sshd[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:21.974205 systemd-logind[1998]: New session 8 of user core. Apr 30 03:29:21.979674 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 30 03:29:22.012798 containerd[2020]: time="2025-04-30T03:29:22.012745696Z" level=error msg="StopPodSandbox for \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\" failed" error="failed to destroy network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.013544 kubelet[3315]: E0430 03:29:22.013287 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:22.013544 kubelet[3315]: E0430 03:29:22.013362 3315 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987"} Apr 30 03:29:22.013544 kubelet[3315]: E0430 03:29:22.013441 3315 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c88b3859-8c19-40ee-b3d5-67ca01136bf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:22.013544 kubelet[3315]: E0430 03:29:22.013473 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c88b3859-8c19-40ee-b3d5-67ca01136bf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qrm6j" podUID="c88b3859-8c19-40ee-b3d5-67ca01136bf7" Apr 30 03:29:22.058936 containerd[2020]: time="2025-04-30T03:29:22.058808210Z" level=error msg="Failed to destroy network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.060212 containerd[2020]: time="2025-04-30T03:29:22.060169830Z" level=error msg="encountered an error cleaning up failed sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.060382 containerd[2020]: time="2025-04-30T03:29:22.060352517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6568d4bb6-l2z69,Uid:9a75c049-74d5-4f65-bcf2-58f5a64e3866,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.063671 kubelet[3315]: E0430 03:29:22.060851 3315 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.063671 kubelet[3315]: E0430 03:29:22.060934 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6568d4bb6-l2z69" Apr 30 03:29:22.063671 kubelet[3315]: E0430 03:29:22.060963 3315 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6568d4bb6-l2z69" Apr 30 03:29:22.064243 kubelet[3315]: E0430 03:29:22.061018 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6568d4bb6-l2z69_calico-apiserver(9a75c049-74d5-4f65-bcf2-58f5a64e3866)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6568d4bb6-l2z69_calico-apiserver(9a75c049-74d5-4f65-bcf2-58f5a64e3866)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6568d4bb6-l2z69" podUID="9a75c049-74d5-4f65-bcf2-58f5a64e3866" Apr 30 03:29:22.085992 containerd[2020]: time="2025-04-30T03:29:22.085922524Z" level=error msg="StopPodSandbox for \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\" failed" error="failed to destroy network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.089463 kubelet[3315]: E0430 03:29:22.089332 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:22.089463 kubelet[3315]: E0430 03:29:22.089406 3315 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a"} Apr 30 03:29:22.089770 kubelet[3315]: E0430 03:29:22.089597 3315 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d7c4146-df2d-4753-83ec-174f6ff20d4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:22.089770 kubelet[3315]: E0430 03:29:22.089638 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d7c4146-df2d-4753-83ec-174f6ff20d4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k4mpk" podUID="3d7c4146-df2d-4753-83ec-174f6ff20d4b" Apr 30 03:29:22.098752 containerd[2020]: time="2025-04-30T03:29:22.098698046Z" level=error msg="Failed to destroy network for sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.099402 containerd[2020]: time="2025-04-30T03:29:22.099360675Z" level=error msg="encountered an error cleaning up failed sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.101819 containerd[2020]: time="2025-04-30T03:29:22.100994561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7797bfd-7hhqk,Uid:6170a3e5-e4fb-4596-abdd-016a02fa9e9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.104110 kubelet[3315]: E0430 03:29:22.101413 3315 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.104110 kubelet[3315]: E0430 03:29:22.101477 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d7797bfd-7hhqk" Apr 30 03:29:22.104110 kubelet[3315]: E0430 03:29:22.101505 3315 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d7797bfd-7hhqk" Apr 30 03:29:22.104463 containerd[2020]: time="2025-04-30T03:29:22.102901780Z" level=error msg="Failed to destroy network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.104749 kubelet[3315]: E0430 03:29:22.101556 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79d7797bfd-7hhqk_calico-apiserver(6170a3e5-e4fb-4596-abdd-016a02fa9e9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79d7797bfd-7hhqk_calico-apiserver(6170a3e5-e4fb-4596-abdd-016a02fa9e9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d7797bfd-7hhqk" podUID="6170a3e5-e4fb-4596-abdd-016a02fa9e9d" Apr 30 03:29:22.108771 containerd[2020]: time="2025-04-30T03:29:22.108076441Z" level=error msg="encountered an error cleaning up failed sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.108771 containerd[2020]: time="2025-04-30T03:29:22.108328490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6568d4bb6-sbcbv,Uid:d42df863-30d9-489a-a202-be20feb2d875,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.108984 kubelet[3315]: E0430 03:29:22.108524 3315 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.108984 kubelet[3315]: E0430 
03:29:22.108613 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6568d4bb6-sbcbv" Apr 30 03:29:22.108984 kubelet[3315]: E0430 03:29:22.108642 3315 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6568d4bb6-sbcbv" Apr 30 03:29:22.109145 kubelet[3315]: E0430 03:29:22.108688 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6568d4bb6-sbcbv_calico-apiserver(d42df863-30d9-489a-a202-be20feb2d875)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6568d4bb6-sbcbv_calico-apiserver(d42df863-30d9-489a-a202-be20feb2d875)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6568d4bb6-sbcbv" podUID="d42df863-30d9-489a-a202-be20feb2d875" Apr 30 03:29:22.110627 containerd[2020]: time="2025-04-30T03:29:22.109371718Z" level=error msg="StopPodSandbox for \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\" failed" error="failed to destroy network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.110741 kubelet[3315]: E0430 03:29:22.110374 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:22.111624 kubelet[3315]: E0430 03:29:22.111444 3315 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2"} Apr 30 03:29:22.111624 kubelet[3315]: E0430 03:29:22.111527 3315 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83a6b31a-fa39-475d-820a-3c65d1ea9b44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:22.111624 kubelet[3315]: E0430 03:29:22.111586 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83a6b31a-fa39-475d-820a-3c65d1ea9b44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b8dc9df9d-46jp7" podUID="83a6b31a-fa39-475d-820a-3c65d1ea9b44" Apr 30 03:29:22.403606 sshd[4400]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:22.416056 systemd[1]: sshd@7-172.31.23.191:22-147.75.109.163:47512.service: Deactivated successfully. Apr 30 03:29:22.417063 systemd-logind[1998]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:29:22.427297 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:29:22.428910 systemd-logind[1998]: Removed session 8. Apr 30 03:29:22.662385 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04-shm.mount: Deactivated successfully. Apr 30 03:29:22.663810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf-shm.mount: Deactivated successfully. Apr 30 03:29:22.663969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e-shm.mount: Deactivated successfully. Apr 30 03:29:22.666714 containerd[2020]: time="2025-04-30T03:29:22.666670853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfxjq,Uid:46a42e1d-4f1d-46c0-be67-d687a45629b1,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:22.815744 kubelet[3315]: I0430 03:29:22.814795 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:22.817586 containerd[2020]: time="2025-04-30T03:29:22.817138532Z" level=info msg="StopPodSandbox for \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\"" Apr 30 03:29:22.817586 containerd[2020]: time="2025-04-30T03:29:22.817343886Z" level=info msg="Ensure that sandbox 2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf in task-service has been cleanup successfully" Apr 30 03:29:22.819660 kubelet[3315]: I0430 03:29:22.818985 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:22.823488 containerd[2020]: time="2025-04-30T03:29:22.823443158Z" level=info msg="StopPodSandbox for \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\"" Apr 30 03:29:22.825499 containerd[2020]: time="2025-04-30T03:29:22.824618721Z" level=info msg="Ensure that sandbox 68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e in task-service has been cleanup successfully" Apr 30 03:29:22.827406 kubelet[3315]: I0430 03:29:22.827179 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:22.833493 containerd[2020]: time="2025-04-30T03:29:22.833451547Z" level=info msg="StopPodSandbox 
for \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\"" Apr 30 03:29:22.835293 containerd[2020]: time="2025-04-30T03:29:22.835066701Z" level=info msg="Ensure that sandbox c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04 in task-service has been cleanup successfully" Apr 30 03:29:22.922697 containerd[2020]: time="2025-04-30T03:29:22.921993566Z" level=error msg="StopPodSandbox for \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\" failed" error="failed to destroy network for sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.922846 kubelet[3315]: E0430 03:29:22.922274 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:22.922846 kubelet[3315]: E0430 03:29:22.922329 3315 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04"} Apr 30 03:29:22.922846 kubelet[3315]: E0430 03:29:22.922376 3315 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6170a3e5-e4fb-4596-abdd-016a02fa9e9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:22.922846 kubelet[3315]: E0430 03:29:22.922420 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6170a3e5-e4fb-4596-abdd-016a02fa9e9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d7797bfd-7hhqk" podUID="6170a3e5-e4fb-4596-abdd-016a02fa9e9d" Apr 30 03:29:22.945588 containerd[2020]: time="2025-04-30T03:29:22.942233269Z" level=error msg="Failed to destroy network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.945588 containerd[2020]: time="2025-04-30T03:29:22.943842594Z" level=error msg="encountered an error cleaning up failed sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.945588 containerd[2020]: time="2025-04-30T03:29:22.943915290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfxjq,Uid:46a42e1d-4f1d-46c0-be67-d687a45629b1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.953314 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f-shm.mount: Deactivated successfully. Apr 30 03:29:22.958234 kubelet[3315]: E0430 03:29:22.953904 3315 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.958234 kubelet[3315]: E0430 03:29:22.953990 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lfxjq" Apr 30 03:29:22.958234 kubelet[3315]: E0430 03:29:22.954020 3315 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lfxjq" Apr 30 03:29:22.958424 kubelet[3315]: E0430 03:29:22.954092 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lfxjq_calico-system(46a42e1d-4f1d-46c0-be67-d687a45629b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lfxjq_calico-system(46a42e1d-4f1d-46c0-be67-d687a45629b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lfxjq" podUID="46a42e1d-4f1d-46c0-be67-d687a45629b1" Apr 30 03:29:22.973486 containerd[2020]: time="2025-04-30T03:29:22.973090759Z" level=error msg="StopPodSandbox for \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\" failed" error="failed to destroy network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.973486 containerd[2020]: 
time="2025-04-30T03:29:22.973273698Z" level=error msg="StopPodSandbox for \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\" failed" error="failed to destroy network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:22.975514 kubelet[3315]: E0430 03:29:22.973543 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:22.975514 kubelet[3315]: E0430 03:29:22.973771 3315 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e"} Apr 30 03:29:22.975514 kubelet[3315]: E0430 03:29:22.973916 3315 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a75c049-74d5-4f65-bcf2-58f5a64e3866\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:22.975514 kubelet[3315]: E0430 03:29:22.973949 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a75c049-74d5-4f65-bcf2-58f5a64e3866\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6568d4bb6-l2z69" podUID="9a75c049-74d5-4f65-bcf2-58f5a64e3866" Apr 30 03:29:22.976991 kubelet[3315]: E0430 03:29:22.974131 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:22.976991 kubelet[3315]: E0430 03:29:22.974190 3315 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf"} Apr 30 03:29:22.976991 kubelet[3315]: E0430 03:29:22.974398 3315 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d42df863-30d9-489a-a202-be20feb2d875\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:22.976991 kubelet[3315]: E0430 03:29:22.974503 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d42df863-30d9-489a-a202-be20feb2d875\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6568d4bb6-sbcbv" podUID="d42df863-30d9-489a-a202-be20feb2d875" Apr 30 03:29:23.533181 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:23.532652 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:23.532759 systemd-resolved[1907]: Flushed all caches. Apr 30 03:29:23.832408 kubelet[3315]: I0430 03:29:23.831964 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:23.834556 containerd[2020]: time="2025-04-30T03:29:23.834028264Z" level=info msg="StopPodSandbox for \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\"" Apr 30 03:29:23.834556 containerd[2020]: time="2025-04-30T03:29:23.834259968Z" level=info msg="Ensure that sandbox fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f in task-service has been cleanup successfully" Apr 30 03:29:23.897489 containerd[2020]: time="2025-04-30T03:29:23.897431974Z" level=error msg="StopPodSandbox for \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\" failed" error="failed to destroy network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:23.899154 kubelet[3315]: E0430 03:29:23.898980 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:23.899154 kubelet[3315]: E0430 03:29:23.899038 3315 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f"} Apr 30 03:29:23.899154 kubelet[3315]: E0430 03:29:23.899083 3315 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46a42e1d-4f1d-46c0-be67-d687a45629b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Apr 30 03:29:23.899154 kubelet[3315]: E0430 03:29:23.899114 3315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46a42e1d-4f1d-46c0-be67-d687a45629b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lfxjq" podUID="46a42e1d-4f1d-46c0-be67-d687a45629b1" Apr 30 03:29:27.338092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3595348576.mount: Deactivated successfully. Apr 30 03:29:27.429838 containerd[2020]: time="2025-04-30T03:29:27.418039449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:27.436940 containerd[2020]: time="2025-04-30T03:29:27.436882171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:29:27.447374 containerd[2020]: time="2025-04-30T03:29:27.447332247Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:27.448073 containerd[2020]: time="2025-04-30T03:29:27.448030266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:27.448739 systemd[1]: Started sshd@8-172.31.23.191:22-147.75.109.163:40434.service - OpenSSH per-connection server daemon (147.75.109.163:40434). Apr 30 03:29:27.449329 containerd[2020]: time="2025-04-30T03:29:27.449173070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.670212628s" Apr 30 03:29:27.449329 containerd[2020]: time="2025-04-30T03:29:27.449204764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:29:27.488701 containerd[2020]: time="2025-04-30T03:29:27.488671713Z" level=info msg="CreateContainer within sandbox \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:29:27.500509 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:27.500516 systemd-resolved[1907]: Flushed all caches. Apr 30 03:29:27.503651 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:27.554633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407626496.mount: Deactivated successfully. 
Apr 30 03:29:27.577912 containerd[2020]: time="2025-04-30T03:29:27.577719886Z" level=info msg="CreateContainer within sandbox \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\"" Apr 30 03:29:27.578789 containerd[2020]: time="2025-04-30T03:29:27.578450901Z" level=info msg="StartContainer for \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\"" Apr 30 03:29:27.714117 containerd[2020]: time="2025-04-30T03:29:27.713563545Z" level=info msg="StartContainer for \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\" returns successfully" Apr 30 03:29:27.748306 sshd[4675]: Accepted publickey for core from 147.75.109.163 port 40434 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:27.751990 sshd[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:27.759387 systemd-logind[1998]: New session 9 of user core. Apr 30 03:29:27.765854 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:29:27.912976 kubelet[3315]: I0430 03:29:27.901706 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5pb4r" podStartSLOduration=2.086691553 podStartE2EDuration="17.898296828s" podCreationTimestamp="2025-04-30 03:29:10 +0000 UTC" firstStartedPulling="2025-04-30 03:29:11.638452063 +0000 UTC m=+25.164668806" lastFinishedPulling="2025-04-30 03:29:27.45005735 +0000 UTC m=+40.976274081" observedRunningTime="2025-04-30 03:29:27.895756591 +0000 UTC m=+41.421973342" watchObservedRunningTime="2025-04-30 03:29:27.898296828 +0000 UTC m=+41.424513596" Apr 30 03:29:27.990599 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:29:27.992029 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Apr 30 03:29:28.021205 kubelet[3315]: I0430 03:29:28.020084 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:28.073809 sshd[4675]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:28.084703 systemd[1]: sshd@8-172.31.23.191:22-147.75.109.163:40434.service: Deactivated successfully. Apr 30 03:29:28.100168 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:29:28.101996 systemd-logind[1998]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:29:28.105859 systemd-logind[1998]: Removed session 9. Apr 30 03:29:28.864423 kubelet[3315]: I0430 03:29:28.864346 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:29.548600 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:29.549963 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:29.549997 systemd-resolved[1907]: Flushed all caches. Apr 30 03:29:29.987835 kernel: bpftool[4884]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:29:30.249023 systemd-networkd[1575]: vxlan.calico: Link UP Apr 30 03:29:30.249031 systemd-networkd[1575]: vxlan.calico: Gained carrier Apr 30 03:29:30.251504 (udev-worker)[4730]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:29:30.284147 (udev-worker)[4915]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:29:30.284233 (udev-worker)[4913]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 03:29:31.403841 systemd-networkd[1575]: vxlan.calico: Gained IPv6LL Apr 30 03:29:31.461479 kubelet[3315]: I0430 03:29:31.461425 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:31.533193 systemd[1]: run-containerd-runc-k8s.io-cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234-runc.my01jx.mount: Deactivated successfully. Apr 30 03:29:32.628743 containerd[2020]: time="2025-04-30T03:29:32.628260212Z" level=info msg="StopPodSandbox for \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\"" Apr 30 03:29:33.116494 systemd[1]: Started sshd@9-172.31.23.191:22-147.75.109.163:40450.service - OpenSSH per-connection server daemon (147.75.109.163:40450). Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:32.727 [INFO][5014] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:32.729 [INFO][5014] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" iface="eth0" netns="/var/run/netns/cni-a41664bb-184c-acd5-8532-14b15131d788" Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:32.730 [INFO][5014] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" iface="eth0" netns="/var/run/netns/cni-a41664bb-184c-acd5-8532-14b15131d788" Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:32.733 [INFO][5014] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" iface="eth0" netns="/var/run/netns/cni-a41664bb-184c-acd5-8532-14b15131d788" Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:32.733 [INFO][5014] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:32.733 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:33.091 [INFO][5021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" HandleID="k8s-pod-network.4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:33.094 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:33.097 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:33.120 [WARNING][5021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" HandleID="k8s-pod-network.4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:33.120 [INFO][5021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" HandleID="k8s-pod-network.4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:33.122 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:33.127560 containerd[2020]: 2025-04-30 03:29:33.124 [INFO][5014] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:33.138366 systemd[1]: run-netns-cni\x2da41664bb\x2d184c\x2dacd5\x2d8532\x2d14b15131d788.mount: Deactivated successfully. Apr 30 03:29:33.144425 containerd[2020]: time="2025-04-30T03:29:33.144364814Z" level=info msg="TearDown network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\" successfully" Apr 30 03:29:33.144425 containerd[2020]: time="2025-04-30T03:29:33.144412416Z" level=info msg="StopPodSandbox for \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\" returns successfully" Apr 30 03:29:33.145538 containerd[2020]: time="2025-04-30T03:29:33.145380635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k4mpk,Uid:3d7c4146-df2d-4753-83ec-174f6ff20d4b,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:33.322636 systemd-networkd[1575]: cali5a48f5651dc: Link UP Apr 30 03:29:33.322824 systemd-networkd[1575]: cali5a48f5651dc: Gained carrier Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.224 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0 coredns-7db6d8ff4d- kube-system 3d7c4146-df2d-4753-83ec-174f6ff20d4b 865 0 2025-04-30 03:29:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-191 coredns-7db6d8ff4d-k4mpk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5a48f5651dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k4mpk" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.224 [INFO][5031] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k4mpk" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.265 [INFO][5043] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" HandleID="k8s-pod-network.728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" 
Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.276 [INFO][5043] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" HandleID="k8s-pod-network.728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334f40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-191", "pod":"coredns-7db6d8ff4d-k4mpk", "timestamp":"2025-04-30 03:29:33.265244234 +0000 UTC"}, Hostname:"ip-172-31-23-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.276 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.276 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.276 [INFO][5043] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-191' Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.278 [INFO][5043] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" host="ip-172-31-23-191" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.286 [INFO][5043] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-191" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.292 [INFO][5043] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.294 [INFO][5043] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.296 [INFO][5043] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.296 [INFO][5043] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" host="ip-172-31-23-191" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.298 [INFO][5043] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7 Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.305 [INFO][5043] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" host="ip-172-31-23-191" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.314 [INFO][5043] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.65/26] block=192.168.9.64/26 handle="k8s-pod-network.728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" host="ip-172-31-23-191" Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.314 [INFO][5043] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.65/26] handle="k8s-pod-network.728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" host="ip-172-31-23-191" Apr 30 
03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.314 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:33.341793 containerd[2020]: 2025-04-30 03:29:33.314 [INFO][5043] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.65/26] IPv6=[] ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" HandleID="k8s-pod-network.728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.343307 containerd[2020]: 2025-04-30 03:29:33.318 [INFO][5031] cni-plugin/k8s.go 386: Populated endpoint ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k4mpk" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3d7c4146-df2d-4753-83ec-174f6ff20d4b", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"", Pod:"coredns-7db6d8ff4d-k4mpk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a48f5651dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:33.343307 containerd[2020]: 2025-04-30 03:29:33.319 [INFO][5031] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.65/32] ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k4mpk" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.343307 containerd[2020]: 2025-04-30 03:29:33.319 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a48f5651dc ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k4mpk" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.343307 containerd[2020]: 2025-04-30 03:29:33.323 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-k4mpk" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.343307 containerd[2020]: 2025-04-30 03:29:33.324 [INFO][5031] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k4mpk" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3d7c4146-df2d-4753-83ec-174f6ff20d4b", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7", Pod:"coredns-7db6d8ff4d-k4mpk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a48f5651dc", MAC:"62:4f:62:36:2c:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:33.343307 containerd[2020]: 2025-04-30 03:29:33.338 [INFO][5031] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k4mpk" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:33.387638 containerd[2020]: time="2025-04-30T03:29:33.386933161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:33.387638 containerd[2020]: time="2025-04-30T03:29:33.387318531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:33.387638 containerd[2020]: time="2025-04-30T03:29:33.387335952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:33.387638 containerd[2020]: time="2025-04-30T03:29:33.387458881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:33.404747 sshd[5027]: Accepted publickey for core from 147.75.109.163 port 40450 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:33.407940 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:33.433653 systemd-logind[1998]: New session 10 of user core. Apr 30 03:29:33.439250 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:29:33.481118 containerd[2020]: time="2025-04-30T03:29:33.481071456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k4mpk,Uid:3d7c4146-df2d-4753-83ec-174f6ff20d4b,Namespace:kube-system,Attempt:1,} returns sandbox id \"728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7\"" Apr 30 03:29:33.488885 containerd[2020]: time="2025-04-30T03:29:33.488844358Z" level=info msg="CreateContainer within sandbox \"728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:33.521589 containerd[2020]: time="2025-04-30T03:29:33.521526106Z" level=info msg="CreateContainer within sandbox \"728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e85761c967ffa5266c2b96210f37cc041281922f81ac3884c0ac45989cea3b32\"" Apr 30 03:29:33.522357 containerd[2020]: time="2025-04-30T03:29:33.522316524Z" level=info msg="StartContainer for \"e85761c967ffa5266c2b96210f37cc041281922f81ac3884c0ac45989cea3b32\"" Apr 30 03:29:33.609238 containerd[2020]: time="2025-04-30T03:29:33.609168672Z" level=info msg="StartContainer for \"e85761c967ffa5266c2b96210f37cc041281922f81ac3884c0ac45989cea3b32\" returns successfully" Apr 30 03:29:33.629715 containerd[2020]: time="2025-04-30T03:29:33.628076414Z" level=info msg="StopPodSandbox for \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\"" Apr 30 03:29:33.632658 containerd[2020]: time="2025-04-30T03:29:33.632470481Z" level=info msg="StopPodSandbox for \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\"" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.740 [INFO][5165] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.740 [INFO][5165] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" iface="eth0" netns="/var/run/netns/cni-d73c52d6-271d-a942-b8f5-7374253d1338" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.741 [INFO][5165] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" iface="eth0" netns="/var/run/netns/cni-d73c52d6-271d-a942-b8f5-7374253d1338" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.742 [INFO][5165] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" iface="eth0" netns="/var/run/netns/cni-d73c52d6-271d-a942-b8f5-7374253d1338" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.742 [INFO][5165] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.742 [INFO][5165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.784 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" HandleID="k8s-pod-network.c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.784 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.795 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.805 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" HandleID="k8s-pod-network.c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.805 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" HandleID="k8s-pod-network.c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.808 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:33.817833 containerd[2020]: 2025-04-30 03:29:33.811 [INFO][5165] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.742 [INFO][5164] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.743 [INFO][5164] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" iface="eth0" netns="/var/run/netns/cni-044fab0b-edc5-67af-7c9d-ac2a14937cf4" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.743 [INFO][5164] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" iface="eth0" netns="/var/run/netns/cni-044fab0b-edc5-67af-7c9d-ac2a14937cf4" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.744 [INFO][5164] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" iface="eth0" netns="/var/run/netns/cni-044fab0b-edc5-67af-7c9d-ac2a14937cf4" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.744 [INFO][5164] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.744 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.777 [INFO][5186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" HandleID="k8s-pod-network.c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.777 [INFO][5186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.778 [INFO][5186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.790 [WARNING][5186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" HandleID="k8s-pod-network.c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.791 [INFO][5186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" HandleID="k8s-pod-network.c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.795 [INFO][5186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:33.819293 containerd[2020]: 2025-04-30 03:29:33.812 [INFO][5164] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:33.819293 containerd[2020]: time="2025-04-30T03:29:33.819069739Z" level=info msg="TearDown network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\" successfully" Apr 30 03:29:33.819293 containerd[2020]: time="2025-04-30T03:29:33.819096446Z" level=info msg="StopPodSandbox for \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\" returns successfully" Apr 30 03:29:33.821360 containerd[2020]: time="2025-04-30T03:29:33.819661089Z" level=info msg="TearDown network for sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\" successfully" Apr 30 03:29:33.821360 containerd[2020]: time="2025-04-30T03:29:33.819699134Z" level=info msg="StopPodSandbox for \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\" returns successfully" Apr 30 03:29:33.821360 containerd[2020]: time="2025-04-30T03:29:33.820797555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7797bfd-7hhqk,Uid:6170a3e5-e4fb-4596-abdd-016a02fa9e9d,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:33.822359 containerd[2020]: time="2025-04-30T03:29:33.821984974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrm6j,Uid:c88b3859-8c19-40ee-b3d5-67ca01136bf7,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:33.942185 kubelet[3315]: I0430 03:29:33.940546 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k4mpk" podStartSLOduration=30.94052557 podStartE2EDuration="30.94052557s" podCreationTimestamp="2025-04-30 03:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:33.940073115 +0000 UTC m=+47.466289866" watchObservedRunningTime="2025-04-30 03:29:33.94052557 +0000 UTC m=+47.466742320" Apr 30 03:29:34.058916 sshd[5027]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:34.066326 systemd[1]: sshd@9-172.31.23.191:22-147.75.109.163:40450.service: Deactivated successfully. Apr 30 03:29:34.067542 systemd-logind[1998]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:29:34.073164 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:29:34.074706 systemd-logind[1998]: Removed session 10. Apr 30 03:29:34.106300 systemd[1]: Started sshd@10-172.31.23.191:22-147.75.109.163:40462.service - OpenSSH per-connection server daemon (147.75.109.163:40462). 
Apr 30 03:29:34.119219 systemd-networkd[1575]: cali87dc7e85d3a: Link UP Apr 30 03:29:34.120857 systemd-networkd[1575]: cali87dc7e85d3a: Gained carrier Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:33.955 [INFO][5201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0 coredns-7db6d8ff4d- kube-system c88b3859-8c19-40ee-b3d5-67ca01136bf7 879 0 2025-04-30 03:29:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-191 coredns-7db6d8ff4d-qrm6j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali87dc7e85d3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qrm6j" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:33.955 [INFO][5201] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qrm6j" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.025 [INFO][5226] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" HandleID="k8s-pod-network.319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.037 [INFO][5226] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" HandleID="k8s-pod-network.319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d290), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-191", "pod":"coredns-7db6d8ff4d-qrm6j", "timestamp":"2025-04-30 03:29:34.025750524 +0000 UTC"}, Hostname:"ip-172-31-23-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.037 [INFO][5226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.038 [INFO][5226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
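systemd-networkd's "cali87dc7e85d3a: Link UP / Gained carrier" lines at the head of this stretch correspond to the host side of the new pod's veth pair, which Calico names "cali" plus a hash; it is the same InterfaceName written into the WorkloadEndpoint below. A short way to enumerate those host-side veths on the node, assuming the github.com/vishvananda/netlink package (this reads the kernel's link table directly and is not part of Calico's API):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		panic(err)
	}
	for _, l := range links {
		attrs := l.Attrs()
		// Calico names the host end of each workload veth "cali" plus a
		// hash of the endpoint, matching the interface names logged above.
		if strings.HasPrefix(attrs.Name, "cali") {
			fmt.Printf("%-16s mac=%s state=%s\n", attrs.Name, attrs.HardwareAddr, attrs.OperState)
		}
	}
}
```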
Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.038 [INFO][5226] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-191' Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.040 [INFO][5226] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" host="ip-172-31-23-191" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.046 [INFO][5226] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-191" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.053 [INFO][5226] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.056 [INFO][5226] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.060 [INFO][5226] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.061 [INFO][5226] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" host="ip-172-31-23-191" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.065 [INFO][5226] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466 Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.084 [INFO][5226] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" host="ip-172-31-23-191" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.096 [INFO][5226] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.66/26] block=192.168.9.64/26 handle="k8s-pod-network.319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" host="ip-172-31-23-191" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.096 [INFO][5226] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.66/26] handle="k8s-pod-network.319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" host="ip-172-31-23-191" Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.096 [INFO][5226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
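The run of ipam.go records above is the whole assignment algorithm in order: confirm this node's affinity for block 192.168.9.64/26, load the block, take the next free address from it, create a handle for the allocation, and write the block back to claim the IP. A self-contained sketch of that block-allocator shape (the in-memory block below is a simplification for illustration, not Calico's real datastore object):

```go
package main

import (
	"fmt"
	"net/netip"
)

// block models one IPAM block: a /26 (64 addresses) plus a record of
// which ordinals are in use and the handle that claimed each one.
type block struct {
	cidr    netip.Prefix // e.g. 192.168.9.64/26
	inUse   [64]bool     // ordinal -> allocated?
	handles [64]string   // ordinal -> IPAM handle ID
}

// assign claims the lowest free ordinal for handleID and returns the IP,
// mirroring "Attempting to assign 1 addresses from block".
func (b *block) assign(handleID string) (netip.Addr, bool) {
	for ord := 0; ord < 64; ord++ {
		if b.inUse[ord] {
			continue
		}
		b.inUse[ord] = true
		b.handles[ord] = handleID
		addr := b.cidr.Addr()
		for i := 0; i < ord; i++ {
			addr = addr.Next()
		}
		return addr, true
	}
	return netip.Addr{}, false // block exhausted; IPAM would pick another block
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.9.64/26")}
	// Ordinal 0 (.64) is marked used so the sketch reproduces the .65-.69
	// sequence seen in the log; on the real node it may simply be held by
	// an earlier allocation.
	b.inUse[0] = true
	for _, h := range []string{"k4mpk", "qrm6j", "7hhqk", "46jp7", "lfxjq"} {
		ip, _ := b.assign("k8s-pod-network." + h)
		fmt.Println(h, "->", ip) // .65 through .69, as in the log
	}
}
```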
Apr 30 03:29:34.145702 containerd[2020]: 2025-04-30 03:29:34.096 [INFO][5226] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.66/26] IPv6=[] ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" HandleID="k8s-pod-network.319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:34.154897 containerd[2020]: 2025-04-30 03:29:34.103 [INFO][5201] cni-plugin/k8s.go 386: Populated endpoint ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qrm6j" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c88b3859-8c19-40ee-b3d5-67ca01136bf7", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"", Pod:"coredns-7db6d8ff4d-qrm6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87dc7e85d3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:34.154897 containerd[2020]: 2025-04-30 03:29:34.103 [INFO][5201] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.66/32] ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qrm6j" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:34.154897 containerd[2020]: 2025-04-30 03:29:34.104 [INFO][5201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87dc7e85d3a ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qrm6j" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:34.154897 containerd[2020]: 2025-04-30 03:29:34.109 [INFO][5201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qrm6j" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" 
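The WorkloadEndpoint dumps in these records are Go struct literals printed by the plugin, with ports as hex literals and protocols as numorstring values (Type:1 with StrVal set appears to indicate the string form, "UDP" or "TCP", is the one in use). Decoded, the hex ports are the familiar CoreDNS ones:

```go
package main

import "fmt"

func main() {
	// Port:0x35 and Port:0x23c1 from the endpoint dumps above.
	fmt.Println(0x35, 0x23c1) // 53 9153: DNS and the CoreDNS metrics port
}
```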
Apr 30 03:29:34.154897 containerd[2020]: 2025-04-30 03:29:34.110 [INFO][5201] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qrm6j" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c88b3859-8c19-40ee-b3d5-67ca01136bf7", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466", Pod:"coredns-7db6d8ff4d-qrm6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87dc7e85d3a", MAC:"ea:0d:91:30:fa:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:34.154897 containerd[2020]: 2025-04-30 03:29:34.129 [INFO][5201] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qrm6j" WorkloadEndpoint="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:34.157705 systemd[1]: run-netns-cni\x2dd73c52d6\x2d271d\x2da942\x2db8f5\x2d7374253d1338.mount: Deactivated successfully. Apr 30 03:29:34.157970 systemd[1]: run-netns-cni\x2d044fab0b\x2dedc5\x2d67af\x2d7c9d\x2dac2a14937cf4.mount: Deactivated successfully. Apr 30 03:29:34.228982 containerd[2020]: time="2025-04-30T03:29:34.226411164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:34.228982 containerd[2020]: time="2025-04-30T03:29:34.228924829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:34.228982 containerd[2020]: time="2025-04-30T03:29:34.228947561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:34.229292 containerd[2020]: time="2025-04-30T03:29:34.229078854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:34.246893 systemd-networkd[1575]: cali79f8f63194b: Link UP Apr 30 03:29:34.249277 systemd-networkd[1575]: cali79f8f63194b: Gained carrier Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:33.954 [INFO][5200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0 calico-apiserver-79d7797bfd- calico-apiserver 6170a3e5-e4fb-4596-abdd-016a02fa9e9d 878 0 2025-04-30 03:29:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79d7797bfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-191 calico-apiserver-79d7797bfd-7hhqk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79f8f63194b [] []}} ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-7hhqk" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:33.955 [INFO][5200] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-7hhqk" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.044 [INFO][5231] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" HandleID="k8s-pod-network.8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.055 [INFO][5231] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" HandleID="k8s-pod-network.8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-191", "pod":"calico-apiserver-79d7797bfd-7hhqk", "timestamp":"2025-04-30 03:29:34.044439451 +0000 UTC"}, Hostname:"ip-172-31-23-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.055 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.096 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.096 [INFO][5231] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-191' Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.102 [INFO][5231] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" host="ip-172-31-23-191" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.138 [INFO][5231] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-191" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.193 [INFO][5231] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.197 [INFO][5231] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.204 [INFO][5231] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.204 [INFO][5231] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" host="ip-172-31-23-191" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.207 [INFO][5231] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9 Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.215 [INFO][5231] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" host="ip-172-31-23-191" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.226 [INFO][5231] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.67/26] block=192.168.9.64/26 handle="k8s-pod-network.8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" host="ip-172-31-23-191" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.227 [INFO][5231] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.67/26] handle="k8s-pod-network.8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" host="ip-172-31-23-191" Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.227 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
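Note the timestamps across the two interleaved flows: the apiserver pod's request logged "About to acquire host-wide IPAM lock" at 03:29:34.055 but only acquired it at 03:29:34.096, the instant the CoreDNS request released it. As the name says, the lock serializes all IPAM work on the node, not just work on one block. A toy reproduction of that serialization:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// A process-wide mutex standing in for the host-wide IPAM lock named in
// the log; Calico's real lock guards IPAM work per host, not per block.
var hostWideIPAMLock sync.Mutex

func assign(pod string, hold time.Duration, wg *sync.WaitGroup) {
	defer wg.Done()
	t0 := time.Now()
	hostWideIPAMLock.Lock()
	waited := time.Since(t0)
	defer hostWideIPAMLock.Unlock()
	time.Sleep(hold) // stands in for affinity lookup + block write
	fmt.Printf("%s: waited %v for the lock\n", pod, waited.Round(time.Millisecond))
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go assign("coredns-7db6d8ff4d-qrm6j", 40*time.Millisecond, &wg)
	go assign("calico-apiserver-79d7797bfd-7hhqk", 30*time.Millisecond, &wg)
	wg.Wait() // whichever goroutine loses the race waits out the other's hold
}
```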
Apr 30 03:29:34.288886 containerd[2020]: 2025-04-30 03:29:34.228 [INFO][5231] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.67/26] IPv6=[] ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" HandleID="k8s-pod-network.8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:34.289636 containerd[2020]: 2025-04-30 03:29:34.236 [INFO][5200] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-7hhqk" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0", GenerateName:"calico-apiserver-79d7797bfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6170a3e5-e4fb-4596-abdd-016a02fa9e9d", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7797bfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"", Pod:"calico-apiserver-79d7797bfd-7hhqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79f8f63194b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:34.289636 containerd[2020]: 2025-04-30 03:29:34.236 [INFO][5200] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.67/32] ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-7hhqk" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:34.289636 containerd[2020]: 2025-04-30 03:29:34.236 [INFO][5200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79f8f63194b ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-7hhqk" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:34.289636 containerd[2020]: 2025-04-30 03:29:34.255 [INFO][5200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-7hhqk" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:34.289636 containerd[2020]: 2025-04-30 03:29:34.258 [INFO][5200] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-7hhqk" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0", GenerateName:"calico-apiserver-79d7797bfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6170a3e5-e4fb-4596-abdd-016a02fa9e9d", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7797bfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9", Pod:"calico-apiserver-79d7797bfd-7hhqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79f8f63194b", MAC:"46:fb:4d:6e:94:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:34.289636 containerd[2020]: 2025-04-30 03:29:34.285 [INFO][5200] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-7hhqk" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:34.327425 containerd[2020]: time="2025-04-30T03:29:34.326991479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:34.327425 containerd[2020]: time="2025-04-30T03:29:34.327061198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:34.327425 containerd[2020]: time="2025-04-30T03:29:34.327098034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:34.327425 containerd[2020]: time="2025-04-30T03:29:34.327229029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:34.373273 containerd[2020]: time="2025-04-30T03:29:34.373235383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrm6j,Uid:c88b3859-8c19-40ee-b3d5-67ca01136bf7,Namespace:kube-system,Attempt:1,} returns sandbox id \"319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466\"" Apr 30 03:29:34.389313 containerd[2020]: time="2025-04-30T03:29:34.389212763Z" level=info msg="CreateContainer within sandbox \"319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:34.415195 containerd[2020]: time="2025-04-30T03:29:34.415146712Z" level=info msg="CreateContainer within sandbox \"319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dbc607f5ed7d9d3b2287b1ddc4f1d7fb42df0a35d3be75d02ab8afc0b7bd3cea\"" Apr 30 03:29:34.416668 containerd[2020]: time="2025-04-30T03:29:34.416158914Z" level=info msg="StartContainer for \"dbc607f5ed7d9d3b2287b1ddc4f1d7fb42df0a35d3be75d02ab8afc0b7bd3cea\"" Apr 30 03:29:34.431022 sshd[5242]: Accepted publickey for core from 147.75.109.163 port 40462 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:34.434256 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:34.448489 systemd-logind[1998]: New session 11 of user core. Apr 30 03:29:34.455734 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:29:34.483421 containerd[2020]: time="2025-04-30T03:29:34.482909140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7797bfd-7hhqk,Uid:6170a3e5-e4fb-4596-abdd-016a02fa9e9d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9\"" Apr 30 03:29:34.487724 containerd[2020]: time="2025-04-30T03:29:34.487446774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:29:34.523777 containerd[2020]: time="2025-04-30T03:29:34.520582341Z" level=info msg="StartContainer for \"dbc607f5ed7d9d3b2287b1ddc4f1d7fb42df0a35d3be75d02ab8afc0b7bd3cea\" returns successfully" Apr 30 03:29:34.630611 containerd[2020]: time="2025-04-30T03:29:34.630037879Z" level=info msg="StopPodSandbox for \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\"" Apr 30 03:29:34.633086 containerd[2020]: time="2025-04-30T03:29:34.633055002Z" level=info msg="StopPodSandbox for \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\"" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.779 [INFO][5423] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.780 [INFO][5423] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" iface="eth0" netns="/var/run/netns/cni-3362c026-bde0-f5d8-1267-04cb8e21a58b" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.781 [INFO][5423] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" iface="eth0" netns="/var/run/netns/cni-3362c026-bde0-f5d8-1267-04cb8e21a58b" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.781 [INFO][5423] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" iface="eth0" netns="/var/run/netns/cni-3362c026-bde0-f5d8-1267-04cb8e21a58b" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.781 [INFO][5423] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.781 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.840 [INFO][5438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" HandleID="k8s-pod-network.62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.840 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.842 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.854 [WARNING][5438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" HandleID="k8s-pod-network.62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.854 [INFO][5438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" HandleID="k8s-pod-network.62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.857 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:34.867967 containerd[2020]: 2025-04-30 03:29:34.862 [INFO][5423] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:34.870451 containerd[2020]: time="2025-04-30T03:29:34.869098582Z" level=info msg="TearDown network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\" successfully" Apr 30 03:29:34.870451 containerd[2020]: time="2025-04-30T03:29:34.869133723Z" level=info msg="StopPodSandbox for \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\" returns successfully" Apr 30 03:29:34.871134 containerd[2020]: time="2025-04-30T03:29:34.871104415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b8dc9df9d-46jp7,Uid:83a6b31a-fa39-475d-820a-3c65d1ea9b44,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:34.946818 sshd[5242]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:34.972379 systemd[1]: sshd@10-172.31.23.191:22-147.75.109.163:40462.service: Deactivated successfully. Apr 30 03:29:34.986972 systemd-logind[1998]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:29:34.988306 systemd-networkd[1575]: cali5a48f5651dc: Gained IPv6LL Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.805 [INFO][5425] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.805 [INFO][5425] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" iface="eth0" netns="/var/run/netns/cni-e5ed16db-3954-04ab-f7fc-9c9c64b9e6da" Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.805 [INFO][5425] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" iface="eth0" netns="/var/run/netns/cni-e5ed16db-3954-04ab-f7fc-9c9c64b9e6da" Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.806 [INFO][5425] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" iface="eth0" netns="/var/run/netns/cni-e5ed16db-3954-04ab-f7fc-9c9c64b9e6da" Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.806 [INFO][5425] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.806 [INFO][5425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.894 [INFO][5444] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" HandleID="k8s-pod-network.fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.895 [INFO][5444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.896 [INFO][5444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.917 [WARNING][5444] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" HandleID="k8s-pod-network.fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.919 [INFO][5444] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" HandleID="k8s-pod-network.fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.925 [INFO][5444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:34.997832 containerd[2020]: 2025-04-30 03:29:34.958 [INFO][5425] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:34.997832 containerd[2020]: time="2025-04-30T03:29:34.995434647Z" level=info msg="TearDown network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\" successfully" Apr 30 03:29:34.997832 containerd[2020]: time="2025-04-30T03:29:34.995466913Z" level=info msg="StopPodSandbox for \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\" returns successfully" Apr 30 03:29:34.999007 systemd[1]: Started sshd@11-172.31.23.191:22-147.75.109.163:40464.service - OpenSSH per-connection server daemon (147.75.109.163:40464). Apr 30 03:29:35.008271 kubelet[3315]: I0430 03:29:35.002332 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qrm6j" podStartSLOduration=32.002307632 podStartE2EDuration="32.002307632s" podCreationTimestamp="2025-04-30 03:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:35.001868101 +0000 UTC m=+48.528084848" watchObservedRunningTime="2025-04-30 03:29:35.002307632 +0000 UTC m=+48.528524383" Apr 30 03:29:34.999494 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:29:35.018925 systemd-logind[1998]: Removed session 11. Apr 30 03:29:35.046689 containerd[2020]: time="2025-04-30T03:29:35.045819785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfxjq,Uid:46a42e1d-4f1d-46c0-be67-d687a45629b1,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:35.159800 systemd[1]: run-netns-cni\x2de5ed16db\x2d3954\x2d04ab\x2df7fc\x2d9c9c64b9e6da.mount: Deactivated successfully. Apr 30 03:29:35.160012 systemd[1]: run-netns-cni\x2d3362c026\x2dbde0\x2df5d8\x2d1267\x2d04cb8e21a58b.mount: Deactivated successfully. Apr 30 03:29:35.308988 systemd-networkd[1575]: cali87dc7e85d3a: Gained IPv6LL Apr 30 03:29:35.337916 systemd-networkd[1575]: cali5c97a8169dc: Link UP Apr 30 03:29:35.338702 sshd[5466]: Accepted publickey for core from 147.75.109.163 port 40464 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:35.339738 systemd-networkd[1575]: cali5c97a8169dc: Gained carrier Apr 30 03:29:35.341697 sshd[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:35.353778 systemd-logind[1998]: New session 12 of user core. Apr 30 03:29:35.355838 systemd[1]: Started session-12.scope - Session 12 of User core. 
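The kubelet's pod_startup_latency_tracker record above reports podStartSLOduration=32.002307632s for coredns-7db6d8ff4d-qrm6j. With both pull timestamps at their zero value (no image pull was needed), that figure is simply observedRunningTime minus podCreationTimestamp, which the values from the record confirm:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp and observedRunningTime from the kubelet record.
	created, _ := time.Parse(time.RFC3339, "2025-04-30T03:29:03Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-04-30T03:29:35.002307632Z")
	fmt.Println(running.Sub(created)) // 32.002307632s
}
```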
Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.195 [INFO][5453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0 calico-kube-controllers-5b8dc9df9d- calico-system 83a6b31a-fa39-475d-820a-3c65d1ea9b44 897 0 2025-04-30 03:29:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b8dc9df9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-191 calico-kube-controllers-5b8dc9df9d-46jp7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5c97a8169dc [] []}} ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Namespace="calico-system" Pod="calico-kube-controllers-5b8dc9df9d-46jp7" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.195 [INFO][5453] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Namespace="calico-system" Pod="calico-kube-controllers-5b8dc9df9d-46jp7" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.266 [INFO][5487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" HandleID="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.282 [INFO][5487] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" HandleID="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ece40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-191", "pod":"calico-kube-controllers-5b8dc9df9d-46jp7", "timestamp":"2025-04-30 03:29:35.266305694 +0000 UTC"}, Hostname:"ip-172-31-23-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.283 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.283 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.283 [INFO][5487] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-191' Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.286 [INFO][5487] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" host="ip-172-31-23-191" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.293 [INFO][5487] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-191" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.300 [INFO][5487] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.302 [INFO][5487] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.305 [INFO][5487] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.305 [INFO][5487] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" host="ip-172-31-23-191" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.313 [INFO][5487] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36 Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.322 [INFO][5487] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" host="ip-172-31-23-191" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.328 [INFO][5487] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.68/26] block=192.168.9.64/26 handle="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" host="ip-172-31-23-191" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.328 [INFO][5487] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.68/26] handle="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" host="ip-172-31-23-191" Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.328 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:35.379220 containerd[2020]: 2025-04-30 03:29:35.328 [INFO][5487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.68/26] IPv6=[] ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" HandleID="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:35.379953 containerd[2020]: 2025-04-30 03:29:35.332 [INFO][5453] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Namespace="calico-system" Pod="calico-kube-controllers-5b8dc9df9d-46jp7" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0", GenerateName:"calico-kube-controllers-5b8dc9df9d-", Namespace:"calico-system", SelfLink:"", UID:"83a6b31a-fa39-475d-820a-3c65d1ea9b44", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b8dc9df9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"", Pod:"calico-kube-controllers-5b8dc9df9d-46jp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c97a8169dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:35.379953 containerd[2020]: 2025-04-30 03:29:35.332 [INFO][5453] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.68/32] ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Namespace="calico-system" Pod="calico-kube-controllers-5b8dc9df9d-46jp7" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:35.379953 containerd[2020]: 2025-04-30 03:29:35.332 [INFO][5453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c97a8169dc ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Namespace="calico-system" Pod="calico-kube-controllers-5b8dc9df9d-46jp7" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:35.379953 containerd[2020]: 2025-04-30 03:29:35.340 [INFO][5453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Namespace="calico-system" Pod="calico-kube-controllers-5b8dc9df9d-46jp7" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:35.379953 containerd[2020]: 2025-04-30 03:29:35.341 [INFO][5453] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Namespace="calico-system" Pod="calico-kube-controllers-5b8dc9df9d-46jp7" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0", GenerateName:"calico-kube-controllers-5b8dc9df9d-", Namespace:"calico-system", SelfLink:"", UID:"83a6b31a-fa39-475d-820a-3c65d1ea9b44", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b8dc9df9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36", Pod:"calico-kube-controllers-5b8dc9df9d-46jp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c97a8169dc", MAC:"32:7d:c3:41:7a:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:35.379953 containerd[2020]: 2025-04-30 03:29:35.366 [INFO][5453] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Namespace="calico-system" Pod="calico-kube-controllers-5b8dc9df9d-46jp7" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:35.414757 containerd[2020]: time="2025-04-30T03:29:35.414290584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:35.415385 containerd[2020]: time="2025-04-30T03:29:35.415216465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:35.415385 containerd[2020]: time="2025-04-30T03:29:35.415275611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:35.415886 containerd[2020]: time="2025-04-30T03:29:35.415698503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:35.416241 systemd-networkd[1575]: calic6ac652d3b4: Link UP Apr 30 03:29:35.420136 systemd-networkd[1575]: calic6ac652d3b4: Gained carrier Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.246 [INFO][5472] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0 csi-node-driver- calico-system 46a42e1d-4f1d-46c0-be67-d687a45629b1 898 0 2025-04-30 03:29:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-191 csi-node-driver-lfxjq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic6ac652d3b4 [] []}} ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Namespace="calico-system" Pod="csi-node-driver-lfxjq" WorkloadEndpoint="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.246 [INFO][5472] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Namespace="calico-system" Pod="csi-node-driver-lfxjq" WorkloadEndpoint="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.306 [INFO][5497] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" HandleID="k8s-pod-network.c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.324 [INFO][5497] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" HandleID="k8s-pod-network.c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003baba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-191", "pod":"csi-node-driver-lfxjq", "timestamp":"2025-04-30 03:29:35.306981049 +0000 UTC"}, Hostname:"ip-172-31-23-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.324 [INFO][5497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.328 [INFO][5497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
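The IPAM request logged just above (03:29:35.324) prints the exact ipam.AutoAssignArgs the Calico CNI plugin builds: Num4:1, Num6:0, a "k8s-pod-network.<containerID>" handle, and namespace/node/pod attributes. A rough sketch of issuing the same request directly against libcalico-go follows; the import paths, client constructor, and AutoAssign return values are assumptions based on Calico's v3 Go client, while the AutoAssignArgs fields are copied from the log entry itself.

```go
// Sketch only: the same IPAM request the CNI plugin logs above, made with
// libcalico-go. Import paths and the AutoAssign signature are assumptions;
// the AutoAssignArgs fields (Num4, Num6, HandleID, Hostname, Attrs) are the
// ones printed in the log.
package main

import (
	"context"
	"fmt"

	"github.com/projectcalico/calico/libcalico-go/lib/apiconfig"
	clientv3 "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	cfg, err := apiconfig.LoadClientConfigFromEnvironment() // DATASTORE_TYPE etc.
	if err != nil {
		panic(err)
	}
	c, err := clientv3.New(*cfg)
	if err != nil {
		panic(err)
	}

	// Handle IDs in the log follow the pattern "k8s-pod-network.<containerID>".
	handle := "k8s-pod-network.example-container-id"
	args := ipam.AutoAssignArgs{
		Num4:     1, // matches "Auto-assign 1 ipv4, 0 ipv6 addrs"
		Num6:     0,
		HandleID: &handle,
		Hostname: "ip-172-31-23-191",
		Attrs: map[string]string{ // same attribute keys as the logged request
			"namespace": "calico-system",
			"node":      "ip-172-31-23-191",
			"pod":       "csi-node-driver-lfxjq",
		},
		// The log also records IntendedUse:"Workload"; omitted here because
		// the exact constant name varies across libcalico-go versions.
	}
	v4, _, err := c.IPAM().AutoAssign(context.Background(), args)
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned:", v4) // the logged outcome here was [192.168.9.69/26]
}
```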
Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.328 [INFO][5497] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-191' Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.331 [INFO][5497] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" host="ip-172-31-23-191" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.340 [INFO][5497] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-191" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.361 [INFO][5497] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.371 [INFO][5497] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.380 [INFO][5497] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.380 [INFO][5497] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" host="ip-172-31-23-191" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.385 [INFO][5497] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83 Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.394 [INFO][5497] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" host="ip-172-31-23-191" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.405 [INFO][5497] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.69/26] block=192.168.9.64/26 handle="k8s-pod-network.c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" host="ip-172-31-23-191" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.405 [INFO][5497] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.69/26] handle="k8s-pod-network.c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" host="ip-172-31-23-191" Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.405 [INFO][5497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
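Every assignment in this section lands in the node's affine block 192.168.9.64/26: .68 for kube-controllers, .69 here, then .70 and .71 for the two apiservers below. A /26 holds 64 addresses (192.168.9.64 through .127), which a minimal standard-library check confirms:

```go
// Minimal check that the addresses assigned in this section all fall inside
// the node's affine IPAM block, 192.168.9.64/26, as the log asserts.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.9.64/26") // covers .64 through .127
	for _, s := range []string{"192.168.9.68", "192.168.9.69", "192.168.9.70", "192.168.9.71"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip)) // all true
	}
}
```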
Apr 30 03:29:35.458885 containerd[2020]: 2025-04-30 03:29:35.406 [INFO][5497] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.69/26] IPv6=[] ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" HandleID="k8s-pod-network.c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:35.459836 containerd[2020]: 2025-04-30 03:29:35.410 [INFO][5472] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Namespace="calico-system" Pod="csi-node-driver-lfxjq" WorkloadEndpoint="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46a42e1d-4f1d-46c0-be67-d687a45629b1", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"", Pod:"csi-node-driver-lfxjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6ac652d3b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:35.459836 containerd[2020]: 2025-04-30 03:29:35.410 [INFO][5472] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.69/32] ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Namespace="calico-system" Pod="csi-node-driver-lfxjq" WorkloadEndpoint="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:35.459836 containerd[2020]: 2025-04-30 03:29:35.410 [INFO][5472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6ac652d3b4 ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Namespace="calico-system" Pod="csi-node-driver-lfxjq" WorkloadEndpoint="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:35.459836 containerd[2020]: 2025-04-30 03:29:35.429 [INFO][5472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Namespace="calico-system" Pod="csi-node-driver-lfxjq" WorkloadEndpoint="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:35.459836 containerd[2020]: 2025-04-30 03:29:35.432 [INFO][5472] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Namespace="calico-system" Pod="csi-node-driver-lfxjq" 
WorkloadEndpoint="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46a42e1d-4f1d-46c0-be67-d687a45629b1", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83", Pod:"csi-node-driver-lfxjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6ac652d3b4", MAC:"c6:6e:35:1c:97:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:35.459836 containerd[2020]: 2025-04-30 03:29:35.453 [INFO][5472] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83" Namespace="calico-system" Pod="csi-node-driver-lfxjq" WorkloadEndpoint="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:35.505779 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:35.505498 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:35.505523 systemd-resolved[1907]: Flushed all caches. Apr 30 03:29:35.542865 containerd[2020]: time="2025-04-30T03:29:35.542744527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:35.542865 containerd[2020]: time="2025-04-30T03:29:35.542834676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:35.543787 containerd[2020]: time="2025-04-30T03:29:35.543085284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:35.543787 containerd[2020]: time="2025-04-30T03:29:35.543234853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:35.612386 containerd[2020]: time="2025-04-30T03:29:35.611331442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b8dc9df9d-46jp7,Uid:83a6b31a-fa39-475d-820a-3c65d1ea9b44,Namespace:calico-system,Attempt:1,} returns sandbox id \"a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36\"" Apr 30 03:29:35.630262 containerd[2020]: time="2025-04-30T03:29:35.627972013Z" level=info msg="StopPodSandbox for \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\"" Apr 30 03:29:35.663487 containerd[2020]: time="2025-04-30T03:29:35.663436244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfxjq,Uid:46a42e1d-4f1d-46c0-be67-d687a45629b1,Namespace:calico-system,Attempt:1,} returns sandbox id \"c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83\"" Apr 30 03:29:35.755749 systemd-networkd[1575]: cali79f8f63194b: Gained IPv6LL Apr 30 03:29:35.761820 sshd[5466]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:35.769995 systemd[1]: sshd@11-172.31.23.191:22-147.75.109.163:40464.service: Deactivated successfully. Apr 30 03:29:35.775902 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:29:35.777237 systemd-logind[1998]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:29:35.779929 systemd-logind[1998]: Removed session 12. Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.743 [INFO][5639] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.745 [INFO][5639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" iface="eth0" netns="/var/run/netns/cni-b6d7787f-05fe-faf4-7733-4a66191fd90d" Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.745 [INFO][5639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" iface="eth0" netns="/var/run/netns/cni-b6d7787f-05fe-faf4-7733-4a66191fd90d" Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.746 [INFO][5639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" iface="eth0" netns="/var/run/netns/cni-b6d7787f-05fe-faf4-7733-4a66191fd90d" Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.746 [INFO][5639] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.746 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.793 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" HandleID="k8s-pod-network.2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.793 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
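The StopPodSandbox path above tears the endpoint down in reverse: delete the workload's veth in its netns, then release the sandbox's addresses by handle ID ("Releasing address using handleID"). A library-style sketch of that release step follows; the ReleaseByHandle name and signature are an assumption taken from older libcalico-go releases and have changed across versions.

```go
// Sketch of the IP-release step in the teardown above. ReleaseByHandle's
// signature is an assumption from older libcalico-go releases.
package teardown

import (
	"context"
	"log"

	clientv3 "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
)

func releaseSandboxIPs(c clientv3.Interface, containerID string) {
	handle := "k8s-pod-network." + containerID // handle pattern seen in the log
	if err := c.IPAM().ReleaseByHandle(context.Background(), handle); err != nil {
		// The log shows the benign miss: "Asked to release address but it
		// doesn't exist. Ignoring" -- the plugin then retries by workload ID.
		log.Printf("release by handle %q: %v", handle, err)
	}
}
```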
Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.793 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.800 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" HandleID="k8s-pod-network.2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.800 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" HandleID="k8s-pod-network.2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.802 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:35.806472 containerd[2020]: 2025-04-30 03:29:35.804 [INFO][5639] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:35.807537 containerd[2020]: time="2025-04-30T03:29:35.806693058Z" level=info msg="TearDown network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\" successfully" Apr 30 03:29:35.807537 containerd[2020]: time="2025-04-30T03:29:35.806726512Z" level=info msg="StopPodSandbox for \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\" returns successfully" Apr 30 03:29:35.807537 containerd[2020]: time="2025-04-30T03:29:35.807448333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6568d4bb6-sbcbv,Uid:d42df863-30d9-489a-a202-be20feb2d875,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:35.970207 systemd-networkd[1575]: cali873d24510a8: Link UP Apr 30 03:29:35.972398 systemd-networkd[1575]: cali873d24510a8: Gained carrier Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.872 [INFO][5655] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0 calico-apiserver-6568d4bb6- calico-apiserver d42df863-30d9-489a-a202-be20feb2d875 939 0 2025-04-30 03:29:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6568d4bb6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-191 calico-apiserver-6568d4bb6-sbcbv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali873d24510a8 [] []}} ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-sbcbv" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.873 [INFO][5655] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-sbcbv" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.916 [INFO][5667] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" HandleID="k8s-pod-network.38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.929 [INFO][5667] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" HandleID="k8s-pod-network.38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335a20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-191", "pod":"calico-apiserver-6568d4bb6-sbcbv", "timestamp":"2025-04-30 03:29:35.915993428 +0000 UTC"}, Hostname:"ip-172-31-23-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.929 [INFO][5667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.929 [INFO][5667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.929 [INFO][5667] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-191' Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.931 [INFO][5667] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" host="ip-172-31-23-191" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.935 [INFO][5667] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-191" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.939 [INFO][5667] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.941 [INFO][5667] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.944 [INFO][5667] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.944 [INFO][5667] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" host="ip-172-31-23-191" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.946 [INFO][5667] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93 Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.953 [INFO][5667] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" host="ip-172-31-23-191" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.963 [INFO][5667] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.70/26] block=192.168.9.64/26 handle="k8s-pod-network.38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" host="ip-172-31-23-191" Apr 
30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.964 [INFO][5667] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.70/26] handle="k8s-pod-network.38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" host="ip-172-31-23-191" Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.964 [INFO][5667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:35.999950 containerd[2020]: 2025-04-30 03:29:35.964 [INFO][5667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.70/26] IPv6=[] ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" HandleID="k8s-pod-network.38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:36.003451 containerd[2020]: 2025-04-30 03:29:35.967 [INFO][5655] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-sbcbv" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0", GenerateName:"calico-apiserver-6568d4bb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"d42df863-30d9-489a-a202-be20feb2d875", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6568d4bb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"", Pod:"calico-apiserver-6568d4bb6-sbcbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali873d24510a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:36.003451 containerd[2020]: 2025-04-30 03:29:35.967 [INFO][5655] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.70/32] ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-sbcbv" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:36.003451 containerd[2020]: 2025-04-30 03:29:35.967 [INFO][5655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali873d24510a8 ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-sbcbv" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:36.003451 containerd[2020]: 2025-04-30 03:29:35.973 [INFO][5655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-sbcbv" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:36.003451 containerd[2020]: 2025-04-30 03:29:35.973 [INFO][5655] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-sbcbv" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0", GenerateName:"calico-apiserver-6568d4bb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"d42df863-30d9-489a-a202-be20feb2d875", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6568d4bb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93", Pod:"calico-apiserver-6568d4bb6-sbcbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali873d24510a8", MAC:"0e:80:32:88:22:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:36.003451 containerd[2020]: 2025-04-30 03:29:35.994 [INFO][5655] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-sbcbv" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:36.048871 containerd[2020]: time="2025-04-30T03:29:36.048654419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:36.049488 containerd[2020]: time="2025-04-30T03:29:36.049412127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:36.049488 containerd[2020]: time="2025-04-30T03:29:36.049439867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:36.049801 containerd[2020]: time="2025-04-30T03:29:36.049562275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:36.133556 containerd[2020]: time="2025-04-30T03:29:36.133504799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6568d4bb6-sbcbv,Uid:d42df863-30d9-489a-a202-be20feb2d875,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93\"" Apr 30 03:29:36.147880 systemd[1]: run-netns-cni\x2db6d7787f\x2d05fe\x2dfaf4\x2d7733\x2d4a66191fd90d.mount: Deactivated successfully. Apr 30 03:29:36.652791 systemd-networkd[1575]: cali5c97a8169dc: Gained IPv6LL Apr 30 03:29:37.356279 systemd-networkd[1575]: calic6ac652d3b4: Gained IPv6LL Apr 30 03:29:37.419752 systemd-networkd[1575]: cali873d24510a8: Gained IPv6LL Apr 30 03:29:37.628231 containerd[2020]: time="2025-04-30T03:29:37.628028834Z" level=info msg="StopPodSandbox for \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\"" Apr 30 03:29:37.735585 containerd[2020]: time="2025-04-30T03:29:37.732769357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:37.735585 containerd[2020]: time="2025-04-30T03:29:37.734844761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:29:37.738175 containerd[2020]: time="2025-04-30T03:29:37.737551360Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:37.743278 containerd[2020]: time="2025-04-30T03:29:37.743217694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:37.747911 containerd[2020]: time="2025-04-30T03:29:37.747449052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.259947189s" Apr 30 03:29:37.747911 containerd[2020]: time="2025-04-30T03:29:37.747494583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:29:37.755948 containerd[2020]: time="2025-04-30T03:29:37.755911258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:29:37.757067 containerd[2020]: time="2025-04-30T03:29:37.757027750Z" level=info msg="CreateContainer within sandbox \"8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:29:37.784142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509303392.mount: Deactivated successfully. Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.732 [INFO][5749] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.732 [INFO][5749] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" iface="eth0" netns="/var/run/netns/cni-d9988fda-d50c-c7ba-7eef-7f3ea0163f56" Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.732 [INFO][5749] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" iface="eth0" netns="/var/run/netns/cni-d9988fda-d50c-c7ba-7eef-7f3ea0163f56" Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.736 [INFO][5749] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" iface="eth0" netns="/var/run/netns/cni-d9988fda-d50c-c7ba-7eef-7f3ea0163f56" Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.736 [INFO][5749] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.736 [INFO][5749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.768 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" HandleID="k8s-pod-network.68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.768 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.768 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.775 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" HandleID="k8s-pod-network.68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.775 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" HandleID="k8s-pod-network.68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.778 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:37.789702 containerd[2020]: 2025-04-30 03:29:37.786 [INFO][5749] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:37.791009 containerd[2020]: time="2025-04-30T03:29:37.790678251Z" level=info msg="TearDown network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\" successfully" Apr 30 03:29:37.791009 containerd[2020]: time="2025-04-30T03:29:37.790704003Z" level=info msg="StopPodSandbox for \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\" returns successfully" Apr 30 03:29:37.791349 containerd[2020]: time="2025-04-30T03:29:37.791272924Z" level=info msg="CreateContainer within sandbox \"8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c4e025efc5fdf6ae6d3bfe09cb92e7250a63cdab502a1ef6a2737421ac80dccf\"" Apr 30 03:29:37.794207 containerd[2020]: time="2025-04-30T03:29:37.794179308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6568d4bb6-l2z69,Uid:9a75c049-74d5-4f65-bcf2-58f5a64e3866,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:37.796784 systemd[1]: run-netns-cni\x2dd9988fda\x2dd50c\x2dc7ba\x2d7eef\x2d7f3ea0163f56.mount: Deactivated successfully. Apr 30 03:29:37.797540 containerd[2020]: time="2025-04-30T03:29:37.797478107Z" level=info msg="StartContainer for \"c4e025efc5fdf6ae6d3bfe09cb92e7250a63cdab502a1ef6a2737421ac80dccf\"" Apr 30 03:29:37.937127 containerd[2020]: time="2025-04-30T03:29:37.936637277Z" level=info msg="StartContainer for \"c4e025efc5fdf6ae6d3bfe09cb92e7250a63cdab502a1ef6a2737421ac80dccf\" returns successfully" Apr 30 03:29:38.023511 systemd-networkd[1575]: cali12100dc134f: Link UP Apr 30 03:29:38.026097 systemd-networkd[1575]: cali12100dc134f: Gained carrier Apr 30 03:29:38.036747 kubelet[3315]: I0430 03:29:38.036131 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79d7797bfd-7hhqk" podStartSLOduration=23.76675176 podStartE2EDuration="27.035719514s" podCreationTimestamp="2025-04-30 03:29:11 +0000 UTC" firstStartedPulling="2025-04-30 03:29:34.485081118 +0000 UTC m=+48.011297853" lastFinishedPulling="2025-04-30 03:29:37.754048876 +0000 UTC m=+51.280265607" observedRunningTime="2025-04-30 03:29:38.027923704 +0000 UTC m=+51.554140454" watchObservedRunningTime="2025-04-30 03:29:38.035719514 +0000 UTC m=+51.561936267" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.876 [INFO][5781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0 calico-apiserver-6568d4bb6- calico-apiserver 9a75c049-74d5-4f65-bcf2-58f5a64e3866 950 0 2025-04-30 03:29:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6568d4bb6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-191 calico-apiserver-6568d4bb6-l2z69 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali12100dc134f [] []}} ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-l2z69" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.876 [INFO][5781] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-l2z69" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.942 [INFO][5805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" HandleID="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.957 [INFO][5805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" HandleID="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332d50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-191", "pod":"calico-apiserver-6568d4bb6-l2z69", "timestamp":"2025-04-30 03:29:37.942926207 +0000 UTC"}, Hostname:"ip-172-31-23-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.957 [INFO][5805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.957 [INFO][5805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.957 [INFO][5805] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-191' Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.959 [INFO][5805] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" host="ip-172-31-23-191" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.963 [INFO][5805] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-191" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.971 [INFO][5805] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.973 [INFO][5805] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.978 [INFO][5805] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.978 [INFO][5805] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" host="ip-172-31-23-191" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.985 [INFO][5805] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1 Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:37.997 [INFO][5805] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 
handle="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" host="ip-172-31-23-191" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:38.012 [INFO][5805] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.71/26] block=192.168.9.64/26 handle="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" host="ip-172-31-23-191" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:38.012 [INFO][5805] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.71/26] handle="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" host="ip-172-31-23-191" Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:38.012 [INFO][5805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:38.078309 containerd[2020]: 2025-04-30 03:29:38.013 [INFO][5805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.71/26] IPv6=[] ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" HandleID="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:38.079958 containerd[2020]: 2025-04-30 03:29:38.016 [INFO][5781] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-l2z69" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0", GenerateName:"calico-apiserver-6568d4bb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a75c049-74d5-4f65-bcf2-58f5a64e3866", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6568d4bb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"", Pod:"calico-apiserver-6568d4bb6-l2z69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12100dc134f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:38.079958 containerd[2020]: 2025-04-30 03:29:38.016 [INFO][5781] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.71/32] ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-l2z69" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:38.079958 containerd[2020]: 2025-04-30 03:29:38.017 [INFO][5781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
cali12100dc134f ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-l2z69" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:38.079958 containerd[2020]: 2025-04-30 03:29:38.023 [INFO][5781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-l2z69" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:38.079958 containerd[2020]: 2025-04-30 03:29:38.026 [INFO][5781] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-l2z69" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0", GenerateName:"calico-apiserver-6568d4bb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a75c049-74d5-4f65-bcf2-58f5a64e3866", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6568d4bb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1", Pod:"calico-apiserver-6568d4bb6-l2z69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12100dc134f", MAC:"ca:8e:55:ec:4a:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:38.079958 containerd[2020]: 2025-04-30 03:29:38.066 [INFO][5781] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Namespace="calico-apiserver" Pod="calico-apiserver-6568d4bb6-l2z69" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:38.138924 containerd[2020]: time="2025-04-30T03:29:38.138707777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:38.139336 containerd[2020]: time="2025-04-30T03:29:38.139190429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:38.139336 containerd[2020]: time="2025-04-30T03:29:38.139231069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:38.139781 containerd[2020]: time="2025-04-30T03:29:38.139702090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:38.247896 containerd[2020]: time="2025-04-30T03:29:38.247738440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6568d4bb6-l2z69,Uid:9a75c049-74d5-4f65-bcf2-58f5a64e3866,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1\"" Apr 30 03:29:38.253556 containerd[2020]: time="2025-04-30T03:29:38.253355998Z" level=info msg="CreateContainer within sandbox \"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:29:38.280432 containerd[2020]: time="2025-04-30T03:29:38.279128184Z" level=info msg="CreateContainer within sandbox \"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855\"" Apr 30 03:29:38.283386 containerd[2020]: time="2025-04-30T03:29:38.283343495Z" level=info msg="StartContainer for \"1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855\"" Apr 30 03:29:38.422758 containerd[2020]: time="2025-04-30T03:29:38.422711156Z" level=info msg="StartContainer for \"1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855\" returns successfully" Apr 30 03:29:39.011331 kubelet[3315]: I0430 03:29:39.011290 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:39.085594 systemd-networkd[1575]: cali12100dc134f: Gained IPv6LL Apr 30 03:29:39.108006 kubelet[3315]: I0430 03:29:39.107943 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6568d4bb6-l2z69" podStartSLOduration=29.107917272 podStartE2EDuration="29.107917272s" podCreationTimestamp="2025-04-30 03:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:39.094883289 +0000 UTC m=+52.621100043" watchObservedRunningTime="2025-04-30 03:29:39.107917272 +0000 UTC m=+52.634134027" Apr 30 03:29:40.807992 systemd[1]: Started sshd@12-172.31.23.191:22-147.75.109.163:44940.service - OpenSSH per-connection server daemon (147.75.109.163:44940). 
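The kubelet line above reports podStartE2EDuration="29.107917272s" for a pod created at 03:29:10; the figure is simply observedRunningTime minus podCreationTimestamp (no image pull was needed, hence the zero-value pulling timestamps). Reproducing the arithmetic from the two timestamps as kubelet prints them, with the monotonic "m=+..." suffix stripped since time.Parse cannot consume it:

```go
// Reproduces the podStartE2EDuration arithmetic from the kubelet log line
// above: observedRunningTime minus podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // kubelet's printed form
	created, err := time.Parse(layout, "2025-04-30 03:29:10 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-04-30 03:29:39.107917272 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 29.107917272s, matching the log
}
```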
Apr 30 03:29:40.994415 containerd[2020]: time="2025-04-30T03:29:40.994252616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:40.996576 containerd[2020]: time="2025-04-30T03:29:40.996443175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:29:40.999242 containerd[2020]: time="2025-04-30T03:29:40.999178460Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:41.015794 containerd[2020]: time="2025-04-30T03:29:41.015745231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:41.016616 containerd[2020]: time="2025-04-30T03:29:41.016360346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.260303465s" Apr 30 03:29:41.016616 containerd[2020]: time="2025-04-30T03:29:41.016394588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:29:41.018949 containerd[2020]: time="2025-04-30T03:29:41.018782922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:29:41.039773 containerd[2020]: time="2025-04-30T03:29:41.039535378Z" level=info msg="CreateContainer within sandbox \"a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:29:41.083832 containerd[2020]: time="2025-04-30T03:29:41.083560183Z" level=info msg="CreateContainer within sandbox \"a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a\"" Apr 30 03:29:41.086708 containerd[2020]: time="2025-04-30T03:29:41.086666552Z" level=info msg="StartContainer for \"d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a\"" Apr 30 03:29:41.138897 sshd[5937]: Accepted publickey for core from 147.75.109.163 port 44940 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:41.142939 sshd[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:41.161243 systemd-logind[1998]: New session 13 of user core. Apr 30 03:29:41.166664 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:29:41.280459 containerd[2020]: time="2025-04-30T03:29:41.279905690Z" level=info msg="StartContainer for \"d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a\" returns successfully" Apr 30 03:29:41.518607 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:41.515929 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:41.515971 systemd-resolved[1907]: Flushed all caches. 
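The kube-controllers pull above reports "bytes read=34789138" completed "in 3.260303465s". Treating "bytes read" as the transferred byte count (an assumption; it is the counter containerd prints when the pull stops), the effective throughput works out to roughly 10.2 MiB/s:

```go
// Effective pull throughput from the two figures logged above. Treating
// "bytes read" as bytes transferred is an assumption; the numbers are
// copied verbatim from the log.
package main

import "fmt"

func main() {
	const bytesRead = 34789138.0 // "bytes read=34789138"
	const seconds = 3.260303465  // "in 3.260303465s"
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~10.2 MiB/s
}
```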
Apr 30 03:29:41.544515 ntpd[1983]: Listen normally on 6 vxlan.calico 192.168.9.64:123 Apr 30 03:29:41.544672 ntpd[1983]: Listen normally on 7 vxlan.calico [fe80::64d4:a7ff:fea9:73a1%4]:123 Apr 30 03:29:41.544739 ntpd[1983]: Listen normally on 8 cali5a48f5651dc [fe80::ecee:eeff:feee:eeee%7]:123 Apr 30 03:29:41.544808 ntpd[1983]: Listen normally on 9 cali87dc7e85d3a [fe80::ecee:eeff:feee:eeee%8]:123 Apr 30 03:29:41.544850 ntpd[1983]: Listen normally on 10 cali79f8f63194b [fe80::ecee:eeff:feee:eeee%9]:123 Apr 30 03:29:41.544888 ntpd[1983]: Listen normally on 11 cali5c97a8169dc [fe80::ecee:eeff:feee:eeee%10]:123 Apr 30 03:29:41.544927 ntpd[1983]: Listen normally on 12 calic6ac652d3b4 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 30 03:29:41.544965 ntpd[1983]: Listen normally on 13 cali873d24510a8 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 30 03:29:41.545003 ntpd[1983]: Listen normally on 14 cali12100dc134f [fe80::ecee:eeff:feee:eeee%13]:123 Apr 30 03:29:42.240454 kubelet[3315]: I0430 03:29:42.240183 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b8dc9df9d-46jp7" podStartSLOduration=25.837036663 podStartE2EDuration="31.24015182s" podCreationTimestamp="2025-04-30 03:29:11 +0000 UTC" firstStartedPulling="2025-04-30 03:29:35.615398893 +0000 UTC m=+49.141615632" lastFinishedPulling="2025-04-30 03:29:41.018514058 +0000 UTC m=+54.544730789" observedRunningTime="2025-04-30 03:29:42.064699675 +0000 UTC m=+55.590916426" watchObservedRunningTime="2025-04-30 03:29:42.24015182 +0000 UTC m=+55.766368572" Apr 30 03:29:42.289209 sshd[5937]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:42.294592 systemd[1]: sshd@12-172.31.23.191:22-147.75.109.163:44940.service: Deactivated successfully. Apr 30 03:29:42.298526 systemd-logind[1998]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:29:42.299807 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:29:42.301233 systemd-logind[1998]: Removed session 13.
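ntpd's listen list above maps one socket per Calico interface: the vxlan.calico overlay device plus one cali* host-side veth per pod started in this section. A sketch that enumerates those veths on the node with the vishvananda/netlink package (a dependency Calico itself uses); running this is our illustration, not something the log records:

```go
// Lists the host-side Calico veths that ntpd binds to above
// (cali5a48f5651dc, cali87dc7e85d3a, ...). Requires root on the node.
package main

import (
	"fmt"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		panic(err)
	}
	for _, l := range links {
		if strings.HasPrefix(l.Attrs().Name, "cali") {
			fmt.Printf("%-16s mtu=%d state=%s\n",
				l.Attrs().Name, l.Attrs().MTU, l.Attrs().OperState)
		}
	}
}
```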
Apr 30 03:29:42.595614 containerd[2020]: time="2025-04-30T03:29:42.595235047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:42.596778 containerd[2020]: time="2025-04-30T03:29:42.596693760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:29:42.597694 containerd[2020]: time="2025-04-30T03:29:42.597598443Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:42.600826 containerd[2020]: time="2025-04-30T03:29:42.600075047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:42.600826 containerd[2020]: time="2025-04-30T03:29:42.600715380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.581901856s" Apr 30 03:29:42.600826 containerd[2020]: time="2025-04-30T03:29:42.600743402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:29:42.601861 containerd[2020]: time="2025-04-30T03:29:42.601841193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:29:42.604489 containerd[2020]: time="2025-04-30T03:29:42.604459160Z" level=info msg="CreateContainer within sandbox \"c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:29:42.676904 containerd[2020]: time="2025-04-30T03:29:42.676853221Z" level=info msg="CreateContainer within sandbox \"c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d91f52795f40695b5b634fd5a84b3b470904353a351040c9e05661fac6384d34\"" Apr 30 03:29:42.677878 containerd[2020]: time="2025-04-30T03:29:42.677851272Z" level=info msg="StartContainer for \"d91f52795f40695b5b634fd5a84b3b470904353a351040c9e05661fac6384d34\"" Apr 30 03:29:42.769821 containerd[2020]: time="2025-04-30T03:29:42.769698938Z" level=info msg="StartContainer for \"d91f52795f40695b5b634fd5a84b3b470904353a351040c9e05661fac6384d34\" returns successfully" Apr 30 03:29:42.889271 containerd[2020]: time="2025-04-30T03:29:42.889149456Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:42.890249 containerd[2020]: time="2025-04-30T03:29:42.890188874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:29:42.892401 containerd[2020]: time="2025-04-30T03:29:42.892362475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 290.395854ms" Apr 30 03:29:42.892401 containerd[2020]: time="2025-04-30T03:29:42.892399204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:29:42.893683 containerd[2020]: time="2025-04-30T03:29:42.893292935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:29:42.895846 containerd[2020]: time="2025-04-30T03:29:42.895761326Z" level=info msg="CreateContainer within sandbox \"38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:29:42.923207 containerd[2020]: time="2025-04-30T03:29:42.923156778Z" level=info msg="CreateContainer within sandbox \"38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3fd0fdc53e04bad651a67e699bd12a40d094ef424a31e37da5e4ce215fd038ee\"" Apr 30 03:29:42.924613 containerd[2020]: time="2025-04-30T03:29:42.923908604Z" level=info msg="StartContainer for \"3fd0fdc53e04bad651a67e699bd12a40d094ef424a31e37da5e4ce215fd038ee\"" Apr 30 03:29:43.023794 containerd[2020]: time="2025-04-30T03:29:43.023751999Z" level=info msg="StartContainer for \"3fd0fdc53e04bad651a67e699bd12a40d094ef424a31e37da5e4ce215fd038ee\" returns successfully" Apr 30 03:29:43.563651 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:43.565661 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:43.563690 systemd-resolved[1907]: Flushed all caches. 
Apr 30 03:29:44.187597 kubelet[3315]: I0430 03:29:44.186093 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6568d4bb6-sbcbv" podStartSLOduration=27.430915911 podStartE2EDuration="34.186076808s" podCreationTimestamp="2025-04-30 03:29:10 +0000 UTC" firstStartedPulling="2025-04-30 03:29:36.137941076 +0000 UTC m=+49.664157810" lastFinishedPulling="2025-04-30 03:29:42.893101977 +0000 UTC m=+56.419318707" observedRunningTime="2025-04-30 03:29:43.095885391 +0000 UTC m=+56.622102151" watchObservedRunningTime="2025-04-30 03:29:44.186076808 +0000 UTC m=+57.712293569" Apr 30 03:29:44.443636 containerd[2020]: time="2025-04-30T03:29:44.443311177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:44.445749 containerd[2020]: time="2025-04-30T03:29:44.445670599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:29:44.448730 containerd[2020]: time="2025-04-30T03:29:44.447930453Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:44.464155 containerd[2020]: time="2025-04-30T03:29:44.463385189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:44.464518 containerd[2020]: time="2025-04-30T03:29:44.464488834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.571147752s" Apr 30 03:29:44.464643 containerd[2020]: time="2025-04-30T03:29:44.464612317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:29:44.467220 containerd[2020]: time="2025-04-30T03:29:44.467191113Z" level=info msg="CreateContainer within sandbox \"c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:29:44.493442 containerd[2020]: time="2025-04-30T03:29:44.493265228Z" level=info msg="CreateContainer within sandbox \"c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03d9358276979b29859475ae23fb3bb7562dd17cf85bff4a577525d2066f8f76\"" Apr 30 03:29:44.494673 containerd[2020]: time="2025-04-30T03:29:44.494235588Z" level=info msg="StartContainer for \"03d9358276979b29859475ae23fb3bb7562dd17cf85bff4a577525d2066f8f76\"" Apr 30 03:29:44.625607 containerd[2020]: time="2025-04-30T03:29:44.625533206Z" level=info msg="StartContainer for \"03d9358276979b29859475ae23fb3bb7562dd17cf85bff4a577525d2066f8f76\" returns successfully" Apr 30 03:29:45.114613 kubelet[3315]: I0430 03:29:45.114545 3315 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: 
csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:29:45.119910 kubelet[3315]: I0430 03:29:45.119867 3315 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:29:46.734965 containerd[2020]: time="2025-04-30T03:29:46.734868920Z" level=info msg="StopPodSandbox for \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\"" Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:46.960 [WARNING][6147] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46a42e1d-4f1d-46c0-be67-d687a45629b1", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83", Pod:"csi-node-driver-lfxjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6ac652d3b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:46.963 [INFO][6147] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:46.963 [INFO][6147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" iface="eth0" netns="" Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:46.963 [INFO][6147] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:46.963 [INFO][6147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:47.056 [INFO][6154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" HandleID="k8s-pod-network.fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:47.056 [INFO][6154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:47.057 [INFO][6154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:47.066 [WARNING][6154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" HandleID="k8s-pod-network.fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:47.066 [INFO][6154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" HandleID="k8s-pod-network.fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:47.067 [INFO][6154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:47.072715 containerd[2020]: 2025-04-30 03:29:47.070 [INFO][6147] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:47.082022 containerd[2020]: time="2025-04-30T03:29:47.081539380Z" level=info msg="TearDown network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\" successfully" Apr 30 03:29:47.082022 containerd[2020]: time="2025-04-30T03:29:47.081594672Z" level=info msg="StopPodSandbox for \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\" returns successfully" Apr 30 03:29:47.186911 containerd[2020]: time="2025-04-30T03:29:47.186852795Z" level=info msg="RemovePodSandbox for \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\"" Apr 30 03:29:47.186911 containerd[2020]: time="2025-04-30T03:29:47.186916292Z" level=info msg="Forcibly stopping sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\"" Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.231 [WARNING][6172] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46a42e1d-4f1d-46c0-be67-d687a45629b1", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"c0e9f0687b6c4db508577d7155e3b5b3f4ef4228349b0d3e49d3628daa73dc83", Pod:"csi-node-driver-lfxjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6ac652d3b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.231 [INFO][6172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.231 [INFO][6172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" iface="eth0" netns="" Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.231 [INFO][6172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.231 [INFO][6172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.259 [INFO][6179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" HandleID="k8s-pod-network.fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.259 [INFO][6179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.259 [INFO][6179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.265 [WARNING][6179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" HandleID="k8s-pod-network.fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.265 [INFO][6179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" HandleID="k8s-pod-network.fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Workload="ip--172--31--23--191-k8s-csi--node--driver--lfxjq-eth0" Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.270 [INFO][6179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:47.274589 containerd[2020]: 2025-04-30 03:29:47.272 [INFO][6172] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f" Apr 30 03:29:47.274589 containerd[2020]: time="2025-04-30T03:29:47.274519886Z" level=info msg="TearDown network for sandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\" successfully" Apr 30 03:29:47.310094 containerd[2020]: time="2025-04-30T03:29:47.309963950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:47.339159 systemd[1]: Started sshd@13-172.31.23.191:22-147.75.109.163:45770.service - OpenSSH per-connection server daemon (147.75.109.163:45770). Apr 30 03:29:47.347193 containerd[2020]: time="2025-04-30T03:29:47.347039539Z" level=info msg="RemovePodSandbox \"fe7e08d10bd1fbd8312406391891a6337478c7ab5867988928bcd1b0c477f46f\" returns successfully" Apr 30 03:29:47.358230 containerd[2020]: time="2025-04-30T03:29:47.358188317Z" level=info msg="StopPodSandbox for \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\"" Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.405 [WARNING][6198] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c88b3859-8c19-40ee-b3d5-67ca01136bf7", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466", Pod:"coredns-7db6d8ff4d-qrm6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87dc7e85d3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.406 [INFO][6198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.406 [INFO][6198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" iface="eth0" netns="" Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.406 [INFO][6198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.406 [INFO][6198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.432 [INFO][6206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" HandleID="k8s-pod-network.c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.432 [INFO][6206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.432 [INFO][6206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.438 [WARNING][6206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" HandleID="k8s-pod-network.c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.438 [INFO][6206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" HandleID="k8s-pod-network.c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.440 [INFO][6206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:47.444824 containerd[2020]: 2025-04-30 03:29:47.443 [INFO][6198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:47.446139 containerd[2020]: time="2025-04-30T03:29:47.444862301Z" level=info msg="TearDown network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\" successfully" Apr 30 03:29:47.446139 containerd[2020]: time="2025-04-30T03:29:47.444897085Z" level=info msg="StopPodSandbox for \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\" returns successfully" Apr 30 03:29:47.446139 containerd[2020]: time="2025-04-30T03:29:47.445456372Z" level=info msg="RemovePodSandbox for \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\"" Apr 30 03:29:47.446139 containerd[2020]: time="2025-04-30T03:29:47.445487000Z" level=info msg="Forcibly stopping sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\"" Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.500 [WARNING][6224] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c88b3859-8c19-40ee-b3d5-67ca01136bf7", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"319362539630c6bc97e7c4e0e29b566e6066c0c2cd8efddcc265b8dc11b23466", Pod:"coredns-7db6d8ff4d-qrm6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87dc7e85d3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.500 [INFO][6224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.500 [INFO][6224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" iface="eth0" netns="" Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.500 [INFO][6224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.500 [INFO][6224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.527 [INFO][6231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" HandleID="k8s-pod-network.c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.527 [INFO][6231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.527 [INFO][6231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.535 [WARNING][6231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" HandleID="k8s-pod-network.c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.535 [INFO][6231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" HandleID="k8s-pod-network.c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--qrm6j-eth0" Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.537 [INFO][6231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:47.540424 containerd[2020]: 2025-04-30 03:29:47.538 [INFO][6224] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987" Apr 30 03:29:47.541123 containerd[2020]: time="2025-04-30T03:29:47.540467626Z" level=info msg="TearDown network for sandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\" successfully" Apr 30 03:29:47.546160 containerd[2020]: time="2025-04-30T03:29:47.546112272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:47.546287 containerd[2020]: time="2025-04-30T03:29:47.546191152Z" level=info msg="RemovePodSandbox \"c87985201e60124587795a8826d2392ad8ce4fbb57e303ba69bc39ce0e766987\" returns successfully" Apr 30 03:29:47.546823 containerd[2020]: time="2025-04-30T03:29:47.546705213Z" level=info msg="StopPodSandbox for \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\"" Apr 30 03:29:47.640696 sshd[6185]: Accepted publickey for core from 147.75.109.163 port 45770 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:47.641922 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:47.654290 systemd-logind[1998]: New session 14 of user core. Apr 30 03:29:47.657925 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.596 [WARNING][6250] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0", GenerateName:"calico-apiserver-6568d4bb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"d42df863-30d9-489a-a202-be20feb2d875", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6568d4bb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93", Pod:"calico-apiserver-6568d4bb6-sbcbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali873d24510a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.597 [INFO][6250] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.597 [INFO][6250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" iface="eth0" netns="" Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.601 [INFO][6250] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.602 [INFO][6250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.643 [INFO][6258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" HandleID="k8s-pod-network.2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.644 [INFO][6258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.644 [INFO][6258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.651 [WARNING][6258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" HandleID="k8s-pod-network.2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.651 [INFO][6258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" HandleID="k8s-pod-network.2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.654 [INFO][6258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:47.661546 containerd[2020]: 2025-04-30 03:29:47.656 [INFO][6250] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:47.661546 containerd[2020]: time="2025-04-30T03:29:47.661486328Z" level=info msg="TearDown network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\" successfully" Apr 30 03:29:47.661546 containerd[2020]: time="2025-04-30T03:29:47.661518122Z" level=info msg="StopPodSandbox for \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\" returns successfully" Apr 30 03:29:47.667126 containerd[2020]: time="2025-04-30T03:29:47.662530377Z" level=info msg="RemovePodSandbox for \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\"" Apr 30 03:29:47.667126 containerd[2020]: time="2025-04-30T03:29:47.662599514Z" level=info msg="Forcibly stopping sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\"" Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.705 [WARNING][6278] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0", GenerateName:"calico-apiserver-6568d4bb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"d42df863-30d9-489a-a202-be20feb2d875", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6568d4bb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"38b1ce5941485eaf885876f899286740159554a128d971352bc521823b61eb93", Pod:"calico-apiserver-6568d4bb6-sbcbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali873d24510a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.706 [INFO][6278] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.706 [INFO][6278] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" iface="eth0" netns="" Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.706 [INFO][6278] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.706 [INFO][6278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.732 [INFO][6285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" HandleID="k8s-pod-network.2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.733 [INFO][6285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.733 [INFO][6285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.738 [WARNING][6285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" HandleID="k8s-pod-network.2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.738 [INFO][6285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" HandleID="k8s-pod-network.2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--sbcbv-eth0" Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.740 [INFO][6285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:47.745076 containerd[2020]: 2025-04-30 03:29:47.741 [INFO][6278] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf" Apr 30 03:29:47.745076 containerd[2020]: time="2025-04-30T03:29:47.744281662Z" level=info msg="TearDown network for sandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\" successfully" Apr 30 03:29:47.751274 containerd[2020]: time="2025-04-30T03:29:47.751205068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:47.766074 containerd[2020]: time="2025-04-30T03:29:47.766020714Z" level=info msg="RemovePodSandbox \"2f0063c6c309130807bdf0f8d479490af8b4beb6509250828dfa773ddca5eddf\" returns successfully" Apr 30 03:29:47.766882 containerd[2020]: time="2025-04-30T03:29:47.766701618Z" level=info msg="StopPodSandbox for \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\"" Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.821 [WARNING][6303] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3d7c4146-df2d-4753-83ec-174f6ff20d4b", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7", Pod:"coredns-7db6d8ff4d-k4mpk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a48f5651dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.821 [INFO][6303] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.821 [INFO][6303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" iface="eth0" netns="" Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.821 [INFO][6303] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.822 [INFO][6303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.854 [INFO][6314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" HandleID="k8s-pod-network.4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.855 [INFO][6314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.855 [INFO][6314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.866 [WARNING][6314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" HandleID="k8s-pod-network.4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.866 [INFO][6314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" HandleID="k8s-pod-network.4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.870 [INFO][6314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:47.875056 containerd[2020]: 2025-04-30 03:29:47.872 [INFO][6303] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:47.876596 containerd[2020]: time="2025-04-30T03:29:47.875350965Z" level=info msg="TearDown network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\" successfully" Apr 30 03:29:47.876596 containerd[2020]: time="2025-04-30T03:29:47.875375654Z" level=info msg="StopPodSandbox for \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\" returns successfully" Apr 30 03:29:47.876596 containerd[2020]: time="2025-04-30T03:29:47.875881922Z" level=info msg="RemovePodSandbox for \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\"" Apr 30 03:29:47.876596 containerd[2020]: time="2025-04-30T03:29:47.875913916Z" level=info msg="Forcibly stopping sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\"" Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:47.938 [WARNING][6332] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3d7c4146-df2d-4753-83ec-174f6ff20d4b", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"728891ee0cf764f5f90f412a69dfd1a5f4836ffbe2306e73204a187a280c59b7", Pod:"coredns-7db6d8ff4d-k4mpk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a48f5651dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:47.939 [INFO][6332] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:47.939 [INFO][6332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" iface="eth0" netns="" Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:47.939 [INFO][6332] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:47.940 [INFO][6332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:47.994 [INFO][6339] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" HandleID="k8s-pod-network.4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:47.994 [INFO][6339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:47.994 [INFO][6339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:48.006 [WARNING][6339] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" HandleID="k8s-pod-network.4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:48.007 [INFO][6339] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" HandleID="k8s-pod-network.4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Workload="ip--172--31--23--191-k8s-coredns--7db6d8ff4d--k4mpk-eth0" Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:48.012 [INFO][6339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:48.016395 containerd[2020]: 2025-04-30 03:29:48.014 [INFO][6332] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a" Apr 30 03:29:48.017333 containerd[2020]: time="2025-04-30T03:29:48.016441367Z" level=info msg="TearDown network for sandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\" successfully" Apr 30 03:29:48.042836 containerd[2020]: time="2025-04-30T03:29:48.042544453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:48.042836 containerd[2020]: time="2025-04-30T03:29:48.042625784Z" level=info msg="RemovePodSandbox \"4fcd206f431d41d3a3adfa9d1f3ad114bbed19b51472047d4c2f017c7225a66a\" returns successfully" Apr 30 03:29:48.043206 containerd[2020]: time="2025-04-30T03:29:48.043177780Z" level=info msg="StopPodSandbox for \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\"" Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.090 [WARNING][6361] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0", GenerateName:"calico-kube-controllers-5b8dc9df9d-", Namespace:"calico-system", SelfLink:"", UID:"83a6b31a-fa39-475d-820a-3c65d1ea9b44", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b8dc9df9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36", Pod:"calico-kube-controllers-5b8dc9df9d-46jp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c97a8169dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.090 [INFO][6361] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.090 [INFO][6361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" iface="eth0" netns="" Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.090 [INFO][6361] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.090 [INFO][6361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.140 [INFO][6368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" HandleID="k8s-pod-network.62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.140 [INFO][6368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.140 [INFO][6368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.148 [WARNING][6368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" HandleID="k8s-pod-network.62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.148 [INFO][6368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" HandleID="k8s-pod-network.62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.153 [INFO][6368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:48.162357 containerd[2020]: 2025-04-30 03:29:48.160 [INFO][6361] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:48.164748 containerd[2020]: time="2025-04-30T03:29:48.162512094Z" level=info msg="TearDown network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\" successfully" Apr 30 03:29:48.164748 containerd[2020]: time="2025-04-30T03:29:48.162545875Z" level=info msg="StopPodSandbox for \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\" returns successfully" Apr 30 03:29:48.164748 containerd[2020]: time="2025-04-30T03:29:48.163137095Z" level=info msg="RemovePodSandbox for \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\"" Apr 30 03:29:48.164748 containerd[2020]: time="2025-04-30T03:29:48.163170500Z" level=info msg="Forcibly stopping sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\"" Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.214 [WARNING][6388] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0", GenerateName:"calico-kube-controllers-5b8dc9df9d-", Namespace:"calico-system", SelfLink:"", UID:"83a6b31a-fa39-475d-820a-3c65d1ea9b44", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b8dc9df9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36", Pod:"calico-kube-controllers-5b8dc9df9d-46jp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c97a8169dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.214 [INFO][6388] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.214 [INFO][6388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" iface="eth0" netns="" Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.214 [INFO][6388] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.214 [INFO][6388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.240 [INFO][6396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" HandleID="k8s-pod-network.62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.241 [INFO][6396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.241 [INFO][6396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.249 [WARNING][6396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" HandleID="k8s-pod-network.62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.249 [INFO][6396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" HandleID="k8s-pod-network.62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.250 [INFO][6396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:48.254193 containerd[2020]: 2025-04-30 03:29:48.252 [INFO][6388] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2" Apr 30 03:29:48.255456 containerd[2020]: time="2025-04-30T03:29:48.254231157Z" level=info msg="TearDown network for sandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\" successfully" Apr 30 03:29:48.269811 containerd[2020]: time="2025-04-30T03:29:48.269574332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:48.269811 containerd[2020]: time="2025-04-30T03:29:48.269646187Z" level=info msg="RemovePodSandbox \"62e57ba3bc99709f661f6600d0fcca08694390b3ad3a9bb90b19e100ecc465e2\" returns successfully" Apr 30 03:29:48.270943 containerd[2020]: time="2025-04-30T03:29:48.270915973Z" level=info msg="StopPodSandbox for \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\"" Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.326 [WARNING][6416] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0", GenerateName:"calico-apiserver-6568d4bb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a75c049-74d5-4f65-bcf2-58f5a64e3866", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6568d4bb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1", Pod:"calico-apiserver-6568d4bb6-l2z69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12100dc134f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.326 [INFO][6416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.326 [INFO][6416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" iface="eth0" netns="" Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.326 [INFO][6416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.326 [INFO][6416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.351 [INFO][6423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" HandleID="k8s-pod-network.68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.351 [INFO][6423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.352 [INFO][6423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.358 [WARNING][6423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" HandleID="k8s-pod-network.68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.359 [INFO][6423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" HandleID="k8s-pod-network.68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.362 [INFO][6423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:48.366438 containerd[2020]: 2025-04-30 03:29:48.364 [INFO][6416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:48.366438 containerd[2020]: time="2025-04-30T03:29:48.366369076Z" level=info msg="TearDown network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\" successfully" Apr 30 03:29:48.366438 containerd[2020]: time="2025-04-30T03:29:48.366398136Z" level=info msg="StopPodSandbox for \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\" returns successfully" Apr 30 03:29:48.368485 containerd[2020]: time="2025-04-30T03:29:48.367829702Z" level=info msg="RemovePodSandbox for \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\"" Apr 30 03:29:48.368485 containerd[2020]: time="2025-04-30T03:29:48.368004919Z" level=info msg="Forcibly stopping sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\"" Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.410 [WARNING][6442] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0", GenerateName:"calico-apiserver-6568d4bb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a75c049-74d5-4f65-bcf2-58f5a64e3866", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6568d4bb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1", Pod:"calico-apiserver-6568d4bb6-l2z69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12100dc134f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.411 [INFO][6442] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.411 [INFO][6442] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" iface="eth0" netns="" Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.411 [INFO][6442] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.411 [INFO][6442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.456 [INFO][6449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" HandleID="k8s-pod-network.68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.457 [INFO][6449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.457 [INFO][6449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.463 [WARNING][6449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" HandleID="k8s-pod-network.68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.463 [INFO][6449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" HandleID="k8s-pod-network.68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.465 [INFO][6449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:48.471353 containerd[2020]: 2025-04-30 03:29:48.467 [INFO][6442] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e" Apr 30 03:29:48.477390 containerd[2020]: time="2025-04-30T03:29:48.471329049Z" level=info msg="TearDown network for sandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\" successfully" Apr 30 03:29:48.481690 containerd[2020]: time="2025-04-30T03:29:48.481165229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:48.481690 containerd[2020]: time="2025-04-30T03:29:48.481270749Z" level=info msg="RemovePodSandbox \"68f40e1cb60f80588c61ce600f3c436b12aceb52142759708197e5d1c8f9fc3e\" returns successfully" Apr 30 03:29:48.490606 containerd[2020]: time="2025-04-30T03:29:48.481940884Z" level=info msg="StopPodSandbox for \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\"" Apr 30 03:29:48.495829 sshd[6185]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:48.506635 systemd[1]: sshd@13-172.31.23.191:22-147.75.109.163:45770.service: Deactivated successfully. Apr 30 03:29:48.527739 systemd-logind[1998]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:29:48.529220 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:29:48.543622 systemd-logind[1998]: Removed session 14. Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.582 [WARNING][6468] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0", GenerateName:"calico-apiserver-79d7797bfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6170a3e5-e4fb-4596-abdd-016a02fa9e9d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7797bfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9", Pod:"calico-apiserver-79d7797bfd-7hhqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79f8f63194b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.583 [INFO][6468] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.583 [INFO][6468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" iface="eth0" netns="" Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.583 [INFO][6468] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.583 [INFO][6468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.607 [INFO][6477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" HandleID="k8s-pod-network.c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.607 [INFO][6477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.607 [INFO][6477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.613 [WARNING][6477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" HandleID="k8s-pod-network.c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.613 [INFO][6477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" HandleID="k8s-pod-network.c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.615 [INFO][6477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:48.618366 containerd[2020]: 2025-04-30 03:29:48.616 [INFO][6468] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:48.618366 containerd[2020]: time="2025-04-30T03:29:48.618341699Z" level=info msg="TearDown network for sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\" successfully" Apr 30 03:29:48.618366 containerd[2020]: time="2025-04-30T03:29:48.618364225Z" level=info msg="StopPodSandbox for \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\" returns successfully" Apr 30 03:29:48.619163 containerd[2020]: time="2025-04-30T03:29:48.619022994Z" level=info msg="RemovePodSandbox for \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\"" Apr 30 03:29:48.619163 containerd[2020]: time="2025-04-30T03:29:48.619049276Z" level=info msg="Forcibly stopping sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\"" Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.666 [WARNING][6495] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0", GenerateName:"calico-apiserver-79d7797bfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6170a3e5-e4fb-4596-abdd-016a02fa9e9d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7797bfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"8cdf8bbdca24af919eb8228a49ac81f21e2bf3f02f73d39324a0adb17fd2feb9", Pod:"calico-apiserver-79d7797bfd-7hhqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79f8f63194b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.666 [INFO][6495] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.666 [INFO][6495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" iface="eth0" netns="" Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.666 [INFO][6495] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.666 [INFO][6495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.696 [INFO][6502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" HandleID="k8s-pod-network.c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.696 [INFO][6502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.696 [INFO][6502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.702 [WARNING][6502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" HandleID="k8s-pod-network.c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.702 [INFO][6502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" HandleID="k8s-pod-network.c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Workload="ip--172--31--23--191-k8s-calico--apiserver--79d7797bfd--7hhqk-eth0" Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.704 [INFO][6502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:48.707519 containerd[2020]: 2025-04-30 03:29:48.705 [INFO][6495] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04" Apr 30 03:29:48.708509 containerd[2020]: time="2025-04-30T03:29:48.707555960Z" level=info msg="TearDown network for sandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\" successfully" Apr 30 03:29:48.713737 containerd[2020]: time="2025-04-30T03:29:48.713689142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:48.714006 containerd[2020]: time="2025-04-30T03:29:48.713776462Z" level=info msg="RemovePodSandbox \"c8f2652bf7b95e77a0379e746c629dccf4c3bde4ff433715c03fa0e8619c3e04\" returns successfully" Apr 30 03:29:49.515925 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:49.515959 systemd-resolved[1907]: Flushed all caches. Apr 30 03:29:49.517621 systemd-journald[1497]: Under memory pressure, flushing caches. 
Apr 30 03:29:51.549345 kubelet[3315]: I0430 03:29:51.549269 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lfxjq" podStartSLOduration=31.748104175 podStartE2EDuration="40.545048465s" podCreationTimestamp="2025-04-30 03:29:11 +0000 UTC" firstStartedPulling="2025-04-30 03:29:35.668516014 +0000 UTC m=+49.194732744" lastFinishedPulling="2025-04-30 03:29:44.465460302 +0000 UTC m=+57.991677034" observedRunningTime="2025-04-30 03:29:45.096998655 +0000 UTC m=+58.623215407" watchObservedRunningTime="2025-04-30 03:29:51.545048465 +0000 UTC m=+65.071265215" Apr 30 03:29:51.727847 containerd[2020]: time="2025-04-30T03:29:51.727778291Z" level=info msg="StopContainer for \"d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a\" with timeout 30 (s)" Apr 30 03:29:51.730960 containerd[2020]: time="2025-04-30T03:29:51.728268609Z" level=info msg="StopContainer for \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\" with timeout 300 (s)" Apr 30 03:29:51.735144 containerd[2020]: time="2025-04-30T03:29:51.734855048Z" level=info msg="Stop container \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\" with signal terminated" Apr 30 03:29:51.738241 containerd[2020]: time="2025-04-30T03:29:51.737971658Z" level=info msg="Stop container \"d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a\" with signal terminated" Apr 30 03:29:51.850589 containerd[2020]: time="2025-04-30T03:29:51.842020079Z" level=info msg="shim disconnected" id=d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a namespace=k8s.io Apr 30 03:29:51.854557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a-rootfs.mount: Deactivated successfully. Apr 30 03:29:51.867493 containerd[2020]: time="2025-04-30T03:29:51.867424004Z" level=warning msg="cleaning up after shim disconnected" id=d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a namespace=k8s.io Apr 30 03:29:51.868689 containerd[2020]: time="2025-04-30T03:29:51.867535498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:51.909526 containerd[2020]: time="2025-04-30T03:29:51.909452399Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:29:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:29:51.941628 containerd[2020]: time="2025-04-30T03:29:51.941576031Z" level=info msg="StopContainer for \"d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a\" returns successfully" Apr 30 03:29:51.956948 containerd[2020]: time="2025-04-30T03:29:51.956905448Z" level=info msg="StopPodSandbox for \"a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36\"" Apr 30 03:29:51.960456 containerd[2020]: time="2025-04-30T03:29:51.960405247Z" level=info msg="Container to stop \"d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:29:51.968134 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36-shm.mount: Deactivated successfully. 
Apr 30 03:29:52.035503 containerd[2020]: time="2025-04-30T03:29:52.035306493Z" level=info msg="shim disconnected" id=a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36 namespace=k8s.io Apr 30 03:29:52.036158 containerd[2020]: time="2025-04-30T03:29:52.036057046Z" level=warning msg="cleaning up after shim disconnected" id=a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36 namespace=k8s.io Apr 30 03:29:52.036158 containerd[2020]: time="2025-04-30T03:29:52.036083583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:52.057814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36-rootfs.mount: Deactivated successfully. Apr 30 03:29:52.261545 systemd-networkd[1575]: cali5c97a8169dc: Link DOWN Apr 30 03:29:52.266154 systemd-networkd[1575]: cali5c97a8169dc: Lost carrier Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.244 [INFO][6621] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.245 [INFO][6621] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" iface="eth0" netns="/var/run/netns/cni-3b1cbbaf-561f-e329-ed80-b62150a17f3a" Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.245 [INFO][6621] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" iface="eth0" netns="/var/run/netns/cni-3b1cbbaf-561f-e329-ed80-b62150a17f3a" Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.264 [INFO][6621] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" after=19.330897ms iface="eth0" netns="/var/run/netns/cni-3b1cbbaf-561f-e329-ed80-b62150a17f3a" Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.264 [INFO][6621] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.264 [INFO][6621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.316 [INFO][6632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" HandleID="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.316 [INFO][6632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.317 [INFO][6632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.444 [INFO][6632] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" HandleID="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.444 [INFO][6632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" HandleID="k8s-pod-network.a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--5b8dc9df9d--46jp7-eth0" Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.446 [INFO][6632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:52.457740 containerd[2020]: 2025-04-30 03:29:52.451 [INFO][6621] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36" Apr 30 03:29:52.461241 containerd[2020]: time="2025-04-30T03:29:52.460138390Z" level=info msg="TearDown network for sandbox \"a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36\" successfully" Apr 30 03:29:52.461241 containerd[2020]: time="2025-04-30T03:29:52.460168225Z" level=info msg="StopPodSandbox for \"a36ca6d06ac6dfac0d7879cb314ae0a5a0b9bc8894d32dc5be5c1dfc54b42e36\" returns successfully" Apr 30 03:29:52.467388 systemd[1]: run-netns-cni\x2d3b1cbbaf\x2d561f\x2de329\x2ded80\x2db62150a17f3a.mount: Deactivated successfully. Apr 30 03:29:52.672158 kubelet[3315]: I0430 03:29:52.672026 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a6b31a-fa39-475d-820a-3c65d1ea9b44-tigera-ca-bundle\") pod \"83a6b31a-fa39-475d-820a-3c65d1ea9b44\" (UID: \"83a6b31a-fa39-475d-820a-3c65d1ea9b44\") " Apr 30 03:29:52.672158 kubelet[3315]: I0430 03:29:52.672117 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6k2p\" (UniqueName: \"kubernetes.io/projected/83a6b31a-fa39-475d-820a-3c65d1ea9b44-kube-api-access-k6k2p\") pod \"83a6b31a-fa39-475d-820a-3c65d1ea9b44\" (UID: \"83a6b31a-fa39-475d-820a-3c65d1ea9b44\") " Apr 30 03:29:52.711345 kubelet[3315]: I0430 03:29:52.708303 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a6b31a-fa39-475d-820a-3c65d1ea9b44-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "83a6b31a-fa39-475d-820a-3c65d1ea9b44" (UID: "83a6b31a-fa39-475d-820a-3c65d1ea9b44"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:29:52.710906 systemd[1]: var-lib-kubelet-pods-83a6b31a\x2dfa39\x2d475d\x2d820a\x2d3c65d1ea9b44-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Apr 30 03:29:52.719982 systemd[1]: var-lib-kubelet-pods-83a6b31a\x2dfa39\x2d475d\x2d820a\x2d3c65d1ea9b44-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6k2p.mount: Deactivated successfully. 
Apr 30 03:29:52.725517 kubelet[3315]: I0430 03:29:52.725439 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83a6b31a-fa39-475d-820a-3c65d1ea9b44-kube-api-access-k6k2p" (OuterVolumeSpecName: "kube-api-access-k6k2p") pod "83a6b31a-fa39-475d-820a-3c65d1ea9b44" (UID: "83a6b31a-fa39-475d-820a-3c65d1ea9b44"). InnerVolumeSpecName "kube-api-access-k6k2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:29:52.779420 kubelet[3315]: I0430 03:29:52.779337 3315 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a6b31a-fa39-475d-820a-3c65d1ea9b44-tigera-ca-bundle\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:29:52.779420 kubelet[3315]: I0430 03:29:52.779387 3315 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k6k2p\" (UniqueName: \"kubernetes.io/projected/83a6b31a-fa39-475d-820a-3c65d1ea9b44-kube-api-access-k6k2p\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:29:53.203541 kubelet[3315]: I0430 03:29:53.200525 3315 scope.go:117] "RemoveContainer" containerID="d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a" Apr 30 03:29:53.212901 containerd[2020]: time="2025-04-30T03:29:53.212862668Z" level=info msg="RemoveContainer for \"d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a\"" Apr 30 03:29:53.225841 containerd[2020]: time="2025-04-30T03:29:53.224642730Z" level=info msg="RemoveContainer for \"d7d5646d72647ba708f44178a6a74894343ab4ffec3b563e7615ddb336f9263a\" returns successfully" Apr 30 03:29:53.309910 kubelet[3315]: I0430 03:29:53.304820 3315 topology_manager.go:215] "Topology Admit Handler" podUID="c62965d7-9f83-46cd-9be0-8aa031aa3424" podNamespace="calico-system" podName="calico-kube-controllers-94bb47b58-2lw4m" Apr 30 03:29:53.312961 kubelet[3315]: E0430 03:29:53.312930 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83a6b31a-fa39-475d-820a-3c65d1ea9b44" containerName="calico-kube-controllers" Apr 30 03:29:53.314996 kubelet[3315]: I0430 03:29:53.314954 3315 memory_manager.go:354] "RemoveStaleState removing state" podUID="83a6b31a-fa39-475d-820a-3c65d1ea9b44" containerName="calico-kube-controllers" Apr 30 03:29:53.392347 kubelet[3315]: I0430 03:29:53.392301 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4llvc\" (UniqueName: \"kubernetes.io/projected/c62965d7-9f83-46cd-9be0-8aa031aa3424-kube-api-access-4llvc\") pod \"calico-kube-controllers-94bb47b58-2lw4m\" (UID: \"c62965d7-9f83-46cd-9be0-8aa031aa3424\") " pod="calico-system/calico-kube-controllers-94bb47b58-2lw4m" Apr 30 03:29:53.392541 kubelet[3315]: I0430 03:29:53.392360 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c62965d7-9f83-46cd-9be0-8aa031aa3424-tigera-ca-bundle\") pod \"calico-kube-controllers-94bb47b58-2lw4m\" (UID: \"c62965d7-9f83-46cd-9be0-8aa031aa3424\") " pod="calico-system/calico-kube-controllers-94bb47b58-2lw4m" Apr 30 03:29:53.538116 systemd[1]: Started sshd@14-172.31.23.191:22-147.75.109.163:45782.service - OpenSSH per-connection server daemon (147.75.109.163:45782). Apr 30 03:29:53.547752 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:53.550934 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:53.547760 systemd-resolved[1907]: Flushed all caches. 
Apr 30 03:29:53.671145 containerd[2020]: time="2025-04-30T03:29:53.671096185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94bb47b58-2lw4m,Uid:c62965d7-9f83-46cd-9be0-8aa031aa3424,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:53.844824 sshd[6646]: Accepted publickey for core from 147.75.109.163 port 45782 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:53.847988 sshd[6646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:53.860704 systemd-logind[1998]: New session 15 of user core. Apr 30 03:29:53.863984 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:29:53.973380 (udev-worker)[6628]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:29:53.974703 systemd-networkd[1575]: cali3888c833371: Link UP Apr 30 03:29:53.974973 systemd-networkd[1575]: cali3888c833371: Gained carrier Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.797 [INFO][6649] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0 calico-kube-controllers-94bb47b58- calico-system c62965d7-9f83-46cd-9be0-8aa031aa3424 1160 0 2025-04-30 03:29:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:94bb47b58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-191 calico-kube-controllers-94bb47b58-2lw4m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3888c833371 [] []}} ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Namespace="calico-system" Pod="calico-kube-controllers-94bb47b58-2lw4m" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.798 [INFO][6649] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Namespace="calico-system" Pod="calico-kube-controllers-94bb47b58-2lw4m" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.894 [INFO][6661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" HandleID="k8s-pod-network.7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.909 [INFO][6661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" HandleID="k8s-pod-network.7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334d30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-191", "pod":"calico-kube-controllers-94bb47b58-2lw4m", "timestamp":"2025-04-30 03:29:53.894435589 +0000 UTC"}, Hostname:"ip-172-31-23-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.909 [INFO][6661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.910 [INFO][6661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.911 [INFO][6661] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-191' Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.912 [INFO][6661] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" host="ip-172-31-23-191" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.923 [INFO][6661] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-191" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.934 [INFO][6661] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.936 [INFO][6661] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.939 [INFO][6661] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ip-172-31-23-191" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.939 [INFO][6661] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" host="ip-172-31-23-191" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.942 [INFO][6661] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.949 [INFO][6661] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" host="ip-172-31-23-191" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.962 [INFO][6661] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.72/26] block=192.168.9.64/26 handle="k8s-pod-network.7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" host="ip-172-31-23-191" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.963 [INFO][6661] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.72/26] handle="k8s-pod-network.7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" host="ip-172-31-23-191" Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.964 [INFO][6661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:54.018433 containerd[2020]: 2025-04-30 03:29:53.964 [INFO][6661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.72/26] IPv6=[] ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" HandleID="k8s-pod-network.7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Workload="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" Apr 30 03:29:54.027249 containerd[2020]: 2025-04-30 03:29:53.968 [INFO][6649] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Namespace="calico-system" Pod="calico-kube-controllers-94bb47b58-2lw4m" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0", GenerateName:"calico-kube-controllers-94bb47b58-", Namespace:"calico-system", SelfLink:"", UID:"c62965d7-9f83-46cd-9be0-8aa031aa3424", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94bb47b58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"", Pod:"calico-kube-controllers-94bb47b58-2lw4m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3888c833371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:54.027249 containerd[2020]: 2025-04-30 03:29:53.968 [INFO][6649] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.72/32] ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Namespace="calico-system" Pod="calico-kube-controllers-94bb47b58-2lw4m" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" Apr 30 03:29:54.027249 containerd[2020]: 2025-04-30 03:29:53.968 [INFO][6649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3888c833371 ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Namespace="calico-system" Pod="calico-kube-controllers-94bb47b58-2lw4m" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" Apr 30 03:29:54.027249 containerd[2020]: 2025-04-30 03:29:53.975 [INFO][6649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Namespace="calico-system" Pod="calico-kube-controllers-94bb47b58-2lw4m" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" Apr 30 03:29:54.027249 containerd[2020]: 2025-04-30 03:29:53.976 [INFO][6649] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Namespace="calico-system" Pod="calico-kube-controllers-94bb47b58-2lw4m" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0", GenerateName:"calico-kube-controllers-94bb47b58-", Namespace:"calico-system", SelfLink:"", UID:"c62965d7-9f83-46cd-9be0-8aa031aa3424", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94bb47b58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-191", ContainerID:"7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b", Pod:"calico-kube-controllers-94bb47b58-2lw4m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3888c833371", MAC:"ba:ca:c9:e9:d1:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:54.027249 containerd[2020]: 2025-04-30 03:29:53.994 [INFO][6649] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b" Namespace="calico-system" Pod="calico-kube-controllers-94bb47b58-2lw4m" WorkloadEndpoint="ip--172--31--23--191-k8s-calico--kube--controllers--94bb47b58--2lw4m-eth0" Apr 30 03:29:54.106187 containerd[2020]: time="2025-04-30T03:29:54.105892527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:54.106187 containerd[2020]: time="2025-04-30T03:29:54.105982843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:54.108045 containerd[2020]: time="2025-04-30T03:29:54.107605801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:54.109413 containerd[2020]: time="2025-04-30T03:29:54.109344652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:54.407118 containerd[2020]: time="2025-04-30T03:29:54.407001209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94bb47b58-2lw4m,Uid:c62965d7-9f83-46cd-9be0-8aa031aa3424,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b\"" Apr 30 03:29:55.003153 kubelet[3315]: I0430 03:29:54.999889 3315 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83a6b31a-fa39-475d-820a-3c65d1ea9b44" path="/var/lib/kubelet/pods/83a6b31a-fa39-475d-820a-3c65d1ea9b44/volumes" Apr 30 03:29:55.072981 containerd[2020]: time="2025-04-30T03:29:55.072917305Z" level=info msg="CreateContainer within sandbox \"7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:29:55.137607 containerd[2020]: time="2025-04-30T03:29:55.137366922Z" level=info msg="CreateContainer within sandbox \"7f97820640581291f5a09e93254363c3d5f15e35d7eca64e7b29ecb3c058530b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e817caf45b3a0fc1414a9c2b8eae38784b4160104689f3cd6820be6671be601b\"" Apr 30 03:29:55.150006 containerd[2020]: time="2025-04-30T03:29:55.149962876Z" level=info msg="StartContainer for \"e817caf45b3a0fc1414a9c2b8eae38784b4160104689f3cd6820be6671be601b\"" Apr 30 03:29:55.282430 systemd-networkd[1575]: cali3888c833371: Gained IPv6LL Apr 30 03:29:55.366014 sshd[6646]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:55.376982 systemd[1]: sshd@14-172.31.23.191:22-147.75.109.163:45782.service: Deactivated successfully. Apr 30 03:29:55.396388 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:29:55.400538 systemd-logind[1998]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:29:55.428347 systemd[1]: Started sshd@15-172.31.23.191:22-147.75.109.163:45786.service - OpenSSH per-connection server daemon (147.75.109.163:45786). Apr 30 03:29:55.438559 containerd[2020]: time="2025-04-30T03:29:55.437510246Z" level=info msg="StartContainer for \"e817caf45b3a0fc1414a9c2b8eae38784b4160104689f3cd6820be6671be601b\" returns successfully" Apr 30 03:29:55.444263 systemd-logind[1998]: Removed session 15. Apr 30 03:29:55.598688 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:55.595829 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:55.595838 systemd-resolved[1907]: Flushed all caches. Apr 30 03:29:55.695318 sshd[6791]: Accepted publickey for core from 147.75.109.163 port 45786 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:55.698987 sshd[6791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:55.705877 systemd-logind[1998]: New session 16 of user core. Apr 30 03:29:55.711360 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:29:56.497703 sshd[6791]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:56.509376 systemd-logind[1998]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:29:56.511145 systemd[1]: sshd@15-172.31.23.191:22-147.75.109.163:45786.service: Deactivated successfully. Apr 30 03:29:56.518789 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:29:56.521039 systemd-logind[1998]: Removed session 16. 
Apr 30 03:29:56.537991 systemd[1]: Started sshd@16-172.31.23.191:22-147.75.109.163:45796.service - OpenSSH per-connection server daemon (147.75.109.163:45796). Apr 30 03:29:56.725607 kubelet[3315]: I0430 03:29:56.708943 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-94bb47b58-2lw4m" podStartSLOduration=3.7060955399999997 podStartE2EDuration="3.70609554s" podCreationTimestamp="2025-04-30 03:29:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:56.679735603 +0000 UTC m=+70.205952353" watchObservedRunningTime="2025-04-30 03:29:56.70609554 +0000 UTC m=+70.232312292" Apr 30 03:29:56.764275 systemd[1]: run-containerd-runc-k8s.io-e817caf45b3a0fc1414a9c2b8eae38784b4160104689f3cd6820be6671be601b-runc.UiLPDV.mount: Deactivated successfully. Apr 30 03:29:56.846034 sshd[6827]: Accepted publickey for core from 147.75.109.163 port 45796 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:56.851457 sshd[6827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:56.880859 systemd-logind[1998]: New session 17 of user core. Apr 30 03:29:56.885972 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:29:56.929837 containerd[2020]: time="2025-04-30T03:29:56.929727447Z" level=info msg="shim disconnected" id=8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf namespace=k8s.io Apr 30 03:29:56.930412 containerd[2020]: time="2025-04-30T03:29:56.929831383Z" level=warning msg="cleaning up after shim disconnected" id=8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf namespace=k8s.io Apr 30 03:29:56.930412 containerd[2020]: time="2025-04-30T03:29:56.929866827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:56.938323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf-rootfs.mount: Deactivated successfully. Apr 30 03:29:57.000466 containerd[2020]: time="2025-04-30T03:29:57.000296106Z" level=info msg="StopContainer for \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\" returns successfully" Apr 30 03:29:57.000922 containerd[2020]: time="2025-04-30T03:29:57.000898723Z" level=info msg="StopPodSandbox for \"8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3\"" Apr 30 03:29:57.001024 containerd[2020]: time="2025-04-30T03:29:57.000957087Z" level=info msg="Container to stop \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:29:57.016281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3-shm.mount: Deactivated successfully. 
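[Annotation] The pod_startup_latency_tracker entry above reports podStartSLOduration=3.70609554s. Since both pull timestamps are the zero value (no image pull happened for this pod), the figure appears to be simply watchObservedRunningTime minus podCreationTimestamp: 03:29:56.70609554 − 03:29:53 = 3.70609554 s; the trailing ...99999997 in the float form is ordinary binary floating-point rounding. A tiny check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-04-30T03:29:53Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-04-30T03:29:56.70609554Z")
	fmt.Println(observed.Sub(created)) // prints: 3.70609554s
}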
Apr 30 03:29:57.076525 containerd[2020]: time="2025-04-30T03:29:57.076260186Z" level=info msg="shim disconnected" id=8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3 namespace=k8s.io Apr 30 03:29:57.077965 containerd[2020]: time="2025-04-30T03:29:57.077880153Z" level=warning msg="cleaning up after shim disconnected" id=8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3 namespace=k8s.io Apr 30 03:29:57.078285 containerd[2020]: time="2025-04-30T03:29:57.078260874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:57.090272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3-rootfs.mount: Deactivated successfully. Apr 30 03:29:57.163918 containerd[2020]: time="2025-04-30T03:29:57.162192104Z" level=info msg="TearDown network for sandbox \"8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3\" successfully" Apr 30 03:29:57.163918 containerd[2020]: time="2025-04-30T03:29:57.162231890Z" level=info msg="StopPodSandbox for \"8710cb032452789bd4cb0b6dcfbfc73f0af53ba4b00c4ca56c7dcc0ce74c6ff3\" returns successfully" Apr 30 03:29:57.382663 kubelet[3315]: I0430 03:29:57.381817 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ade21f3d-5259-47ed-ad9a-431742ebb77b-tigera-ca-bundle\") pod \"ade21f3d-5259-47ed-ad9a-431742ebb77b\" (UID: \"ade21f3d-5259-47ed-ad9a-431742ebb77b\") " Apr 30 03:29:57.382663 kubelet[3315]: I0430 03:29:57.381902 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ade21f3d-5259-47ed-ad9a-431742ebb77b-typha-certs\") pod \"ade21f3d-5259-47ed-ad9a-431742ebb77b\" (UID: \"ade21f3d-5259-47ed-ad9a-431742ebb77b\") " Apr 30 03:29:57.382663 kubelet[3315]: I0430 03:29:57.381946 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b2l5\" (UniqueName: \"kubernetes.io/projected/ade21f3d-5259-47ed-ad9a-431742ebb77b-kube-api-access-2b2l5\") pod \"ade21f3d-5259-47ed-ad9a-431742ebb77b\" (UID: \"ade21f3d-5259-47ed-ad9a-431742ebb77b\") " Apr 30 03:29:57.394870 kubelet[3315]: I0430 03:29:57.394828 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade21f3d-5259-47ed-ad9a-431742ebb77b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ade21f3d-5259-47ed-ad9a-431742ebb77b" (UID: "ade21f3d-5259-47ed-ad9a-431742ebb77b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:29:57.405508 kubelet[3315]: I0430 03:29:57.405328 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade21f3d-5259-47ed-ad9a-431742ebb77b-kube-api-access-2b2l5" (OuterVolumeSpecName: "kube-api-access-2b2l5") pod "ade21f3d-5259-47ed-ad9a-431742ebb77b" (UID: "ade21f3d-5259-47ed-ad9a-431742ebb77b"). InnerVolumeSpecName "kube-api-access-2b2l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:29:57.408238 kubelet[3315]: I0430 03:29:57.407720 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade21f3d-5259-47ed-ad9a-431742ebb77b-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "ade21f3d-5259-47ed-ad9a-431742ebb77b" (UID: "ade21f3d-5259-47ed-ad9a-431742ebb77b"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:29:57.483318 kubelet[3315]: I0430 03:29:57.483249 3315 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2b2l5\" (UniqueName: \"kubernetes.io/projected/ade21f3d-5259-47ed-ad9a-431742ebb77b-kube-api-access-2b2l5\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:29:57.483318 kubelet[3315]: I0430 03:29:57.483282 3315 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ade21f3d-5259-47ed-ad9a-431742ebb77b-tigera-ca-bundle\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:29:57.483318 kubelet[3315]: I0430 03:29:57.483293 3315 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ade21f3d-5259-47ed-ad9a-431742ebb77b-typha-certs\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:29:57.544407 ntpd[1983]: Listen normally on 15 cali3888c833371 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 30 03:29:57.544850 ntpd[1983]: 30 Apr 03:29:57 ntpd[1983]: Listen normally on 15 cali3888c833371 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 30 03:29:57.544850 ntpd[1983]: 30 Apr 03:29:57 ntpd[1983]: Deleting interface #11 cali5c97a8169dc, fe80::ecee:eeff:feee:eeee%10#123, interface stats: received=0, sent=0, dropped=0, active_time=16 secs Apr 30 03:29:57.544458 ntpd[1983]: Deleting interface #11 cali5c97a8169dc, fe80::ecee:eeff:feee:eeee%10#123, interface stats: received=0, sent=0, dropped=0, active_time=16 secs Apr 30 03:29:57.646777 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:57.646625 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:57.646650 systemd-resolved[1907]: Flushed all caches. Apr 30 03:29:57.732287 kubelet[3315]: I0430 03:29:57.662393 3315 scope.go:117] "RemoveContainer" containerID="8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf" Apr 30 03:29:57.746903 systemd[1]: var-lib-kubelet-pods-ade21f3d\x2d5259\x2d47ed\x2dad9a\x2d431742ebb77b-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Apr 30 03:29:57.747149 systemd[1]: var-lib-kubelet-pods-ade21f3d\x2d5259\x2d47ed\x2dad9a\x2d431742ebb77b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2b2l5.mount: Deactivated successfully. Apr 30 03:29:57.747301 systemd[1]: var-lib-kubelet-pods-ade21f3d\x2d5259\x2d47ed\x2dad9a\x2d431742ebb77b-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Apr 30 03:29:57.781794 containerd[2020]: time="2025-04-30T03:29:57.780468280Z" level=info msg="RemoveContainer for \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\"" Apr 30 03:29:57.802619 containerd[2020]: time="2025-04-30T03:29:57.801253804Z" level=info msg="RemoveContainer for \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\" returns successfully" Apr 30 03:29:57.806430 kubelet[3315]: I0430 03:29:57.805731 3315 scope.go:117] "RemoveContainer" containerID="8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf" Apr 30 03:29:57.884052 containerd[2020]: time="2025-04-30T03:29:57.845525641Z" level=error msg="ContainerStatus for \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\": not found" Apr 30 03:29:57.941447 kubelet[3315]: E0430 03:29:57.941239 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\": not found" containerID="8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf" Apr 30 03:29:57.941447 kubelet[3315]: I0430 03:29:57.941323 3315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf"} err="failed to get container status \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf\": not found" Apr 30 03:29:58.670748 kubelet[3315]: I0430 03:29:58.670715 3315 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade21f3d-5259-47ed-ad9a-431742ebb77b" path="/var/lib/kubelet/pods/ade21f3d-5259-47ed-ad9a-431742ebb77b/volumes" Apr 30 03:29:58.725248 systemd[1]: run-containerd-runc-k8s.io-e817caf45b3a0fc1414a9c2b8eae38784b4160104689f3cd6820be6671be601b-runc.TsUgDG.mount: Deactivated successfully. Apr 30 03:29:59.693170 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:29:59.693964 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:29:59.693202 systemd-resolved[1907]: Flushed all caches. Apr 30 03:30:00.484026 sshd[6827]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:00.498630 systemd[1]: sshd@16-172.31.23.191:22-147.75.109.163:45796.service: Deactivated successfully. Apr 30 03:30:00.518665 systemd-logind[1998]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:30:00.519714 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:30:00.543258 systemd[1]: Started sshd@17-172.31.23.191:22-147.75.109.163:58484.service - OpenSSH per-connection server daemon (147.75.109.163:58484). Apr 30 03:30:00.546683 systemd-logind[1998]: Removed session 17. Apr 30 03:30:00.836614 sshd[7046]: Accepted publickey for core from 147.75.109.163 port 58484 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:00.837431 sshd[7046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:00.842626 systemd-logind[1998]: New session 18 of user core. Apr 30 03:30:00.850949 systemd[1]: Started session-18.scope - Session 18 of User core. 
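[Annotation] The RemoveContainer / ContainerStatus exchange above is the usual idempotent-deletion pattern: once the container has been removed, a follow-up ContainerStatus fails with gRPC code NotFound, and the kubelet logs "DeleteContainer returned error" and moves on rather than treating it as a hard failure. A hedged sketch of that tolerance check follows; statusContainer is a stand-in for a real CRI call, not kubelet code.

package main

import (
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// statusContainer is a placeholder for a CRI ContainerStatus call that
// has already been answered with NotFound by the runtime.
func statusContainer(id string) error {
	return status.Error(codes.NotFound,
		"an error occurred when try to find container \""+id+"\": not found")
}

func main() {
	id := "8e15dbc6f202ee13e8a5b1c0066067f783099edea32afebbc215018c63f84bbf"
	if err := statusContainer(id); err != nil {
		// After a successful delete, NotFound is the expected outcome.
		if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
			log.Printf("container %s already gone, nothing to do", id)
			return
		}
		log.Fatalf("ContainerStatus failed: %v", err)
	}
}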
Apr 30 03:30:01.535931 systemd[1]: run-containerd-runc-k8s.io-cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234-runc.xf0rZ8.mount: Deactivated successfully. Apr 30 03:30:01.739937 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:30:01.739946 systemd-resolved[1907]: Flushed all caches. Apr 30 03:30:01.747168 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:30:03.390798 sshd[7046]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:03.401806 systemd[1]: sshd@17-172.31.23.191:22-147.75.109.163:58484.service: Deactivated successfully. Apr 30 03:30:03.412938 systemd-logind[1998]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:30:03.413463 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:30:03.439734 systemd[1]: Started sshd@18-172.31.23.191:22-147.75.109.163:58486.service - OpenSSH per-connection server daemon (147.75.109.163:58486). Apr 30 03:30:03.442188 systemd-logind[1998]: Removed session 18. Apr 30 03:30:03.758547 sshd[7144]: Accepted publickey for core from 147.75.109.163 port 58486 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:03.777213 sshd[7144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:03.793934 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:30:03.788646 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:30:03.788654 systemd-resolved[1907]: Flushed all caches. Apr 30 03:30:03.814540 systemd-logind[1998]: New session 19 of user core. Apr 30 03:30:03.819983 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:30:04.145943 sshd[7144]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:04.150910 systemd[1]: sshd@18-172.31.23.191:22-147.75.109.163:58486.service: Deactivated successfully. Apr 30 03:30:04.158404 systemd-logind[1998]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:30:04.159628 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:30:04.161382 systemd-logind[1998]: Removed session 19. Apr 30 03:30:09.188905 systemd[1]: Started sshd@19-172.31.23.191:22-147.75.109.163:39198.service - OpenSSH per-connection server daemon (147.75.109.163:39198). Apr 30 03:30:09.298036 kubelet[3315]: I0430 03:30:09.297986 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:09.441176 containerd[2020]: time="2025-04-30T03:30:09.440931271Z" level=info msg="StopContainer for \"1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855\" with timeout 30 (s)" Apr 30 03:30:09.444646 containerd[2020]: time="2025-04-30T03:30:09.442939562Z" level=info msg="Stop container \"1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855\" with signal terminated" Apr 30 03:30:09.453286 sshd[7259]: Accepted publickey for core from 147.75.109.163 port 39198 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:09.460084 sshd[7259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:09.483529 systemd-logind[1998]: New session 20 of user core. Apr 30 03:30:09.486961 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 30 03:30:09.632673 containerd[2020]: time="2025-04-30T03:30:09.631081195Z" level=info msg="shim disconnected" id=1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855 namespace=k8s.io Apr 30 03:30:09.632673 containerd[2020]: time="2025-04-30T03:30:09.631177059Z" level=warning msg="cleaning up after shim disconnected" id=1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855 namespace=k8s.io Apr 30 03:30:09.632673 containerd[2020]: time="2025-04-30T03:30:09.631189862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:09.636906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855-rootfs.mount: Deactivated successfully. Apr 30 03:30:09.816643 containerd[2020]: time="2025-04-30T03:30:09.816416224Z" level=info msg="StopContainer for \"1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855\" returns successfully" Apr 30 03:30:09.818294 containerd[2020]: time="2025-04-30T03:30:09.818257399Z" level=info msg="StopPodSandbox for \"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1\"" Apr 30 03:30:09.818412 containerd[2020]: time="2025-04-30T03:30:09.818305230Z" level=info msg="Container to stop \"1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:09.836266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1-shm.mount: Deactivated successfully. Apr 30 03:30:09.878159 sshd[7259]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:09.885730 systemd[1]: sshd@19-172.31.23.191:22-147.75.109.163:39198.service: Deactivated successfully. Apr 30 03:30:09.898283 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:30:09.908510 systemd-logind[1998]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:30:09.914199 systemd-logind[1998]: Removed session 20. Apr 30 03:30:09.965664 containerd[2020]: time="2025-04-30T03:30:09.964476560Z" level=info msg="shim disconnected" id=bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1 namespace=k8s.io Apr 30 03:30:09.965664 containerd[2020]: time="2025-04-30T03:30:09.964796323Z" level=warning msg="cleaning up after shim disconnected" id=bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1 namespace=k8s.io Apr 30 03:30:09.965664 containerd[2020]: time="2025-04-30T03:30:09.964813539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:09.973533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1-rootfs.mount: Deactivated successfully. Apr 30 03:30:10.385769 systemd-networkd[1575]: cali12100dc134f: Link DOWN Apr 30 03:30:10.385778 systemd-networkd[1575]: cali12100dc134f: Lost carrier Apr 30 03:30:10.858639 kubelet[3315]: I0430 03:30:10.858312 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.379 [INFO][7368] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.383 [INFO][7368] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" iface="eth0" netns="/var/run/netns/cni-b16df892-edbe-5e73-be71-b72bf1049a67" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.384 [INFO][7368] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" iface="eth0" netns="/var/run/netns/cni-b16df892-edbe-5e73-be71-b72bf1049a67" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.396 [INFO][7368] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" after=12.900437ms iface="eth0" netns="/var/run/netns/cni-b16df892-edbe-5e73-be71-b72bf1049a67" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.396 [INFO][7368] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.396 [INFO][7368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.794 [INFO][7383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" HandleID="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.800 [INFO][7383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.801 [INFO][7383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.882 [INFO][7383] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" HandleID="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.882 [INFO][7383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" HandleID="k8s-pod-network.bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Workload="ip--172--31--23--191-k8s-calico--apiserver--6568d4bb6--l2z69-eth0" Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.884 [INFO][7383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:10.889015 containerd[2020]: 2025-04-30 03:30:10.887 [INFO][7368] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1" Apr 30 03:30:10.891856 containerd[2020]: time="2025-04-30T03:30:10.891467317Z" level=info msg="TearDown network for sandbox \"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1\" successfully" Apr 30 03:30:10.891856 containerd[2020]: time="2025-04-30T03:30:10.891495092Z" level=info msg="StopPodSandbox for \"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1\" returns successfully" Apr 30 03:30:10.892959 systemd[1]: run-netns-cni\x2db16df892\x2dedbe\x2d5e73\x2dbe71\x2db72bf1049a67.mount: Deactivated successfully. 
Apr 30 03:30:11.104741 kubelet[3315]: I0430 03:30:11.104667 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a75c049-74d5-4f65-bcf2-58f5a64e3866-calico-apiserver-certs\") pod \"9a75c049-74d5-4f65-bcf2-58f5a64e3866\" (UID: \"9a75c049-74d5-4f65-bcf2-58f5a64e3866\") " Apr 30 03:30:11.104882 kubelet[3315]: I0430 03:30:11.104763 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz85j\" (UniqueName: \"kubernetes.io/projected/9a75c049-74d5-4f65-bcf2-58f5a64e3866-kube-api-access-bz85j\") pod \"9a75c049-74d5-4f65-bcf2-58f5a64e3866\" (UID: \"9a75c049-74d5-4f65-bcf2-58f5a64e3866\") " Apr 30 03:30:11.139770 systemd[1]: var-lib-kubelet-pods-9a75c049\x2d74d5\x2d4f65\x2dbcf2\x2d58f5a64e3866-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbz85j.mount: Deactivated successfully. Apr 30 03:30:11.142834 kubelet[3315]: I0430 03:30:11.141119 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a75c049-74d5-4f65-bcf2-58f5a64e3866-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9a75c049-74d5-4f65-bcf2-58f5a64e3866" (UID: "9a75c049-74d5-4f65-bcf2-58f5a64e3866"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:30:11.142834 kubelet[3315]: I0430 03:30:11.141208 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a75c049-74d5-4f65-bcf2-58f5a64e3866-kube-api-access-bz85j" (OuterVolumeSpecName: "kube-api-access-bz85j") pod "9a75c049-74d5-4f65-bcf2-58f5a64e3866" (UID: "9a75c049-74d5-4f65-bcf2-58f5a64e3866"). InnerVolumeSpecName "kube-api-access-bz85j". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:30:11.143627 systemd[1]: var-lib-kubelet-pods-9a75c049\x2d74d5\x2d4f65\x2dbcf2\x2d58f5a64e3866-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Apr 30 03:30:11.206832 kubelet[3315]: I0430 03:30:11.205841 3315 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a75c049-74d5-4f65-bcf2-58f5a64e3866-calico-apiserver-certs\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:11.206832 kubelet[3315]: I0430 03:30:11.205887 3315 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bz85j\" (UniqueName: \"kubernetes.io/projected/9a75c049-74d5-4f65-bcf2-58f5a64e3866-kube-api-access-bz85j\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:11.531679 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:30:11.534437 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:30:11.531707 systemd-resolved[1907]: Flushed all caches. 
Apr 30 03:30:12.544146 ntpd[1983]: Deleting interface #14 cali12100dc134f, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=31 secs Apr 30 03:30:12.545743 ntpd[1983]: 30 Apr 03:30:12 ntpd[1983]: Deleting interface #14 cali12100dc134f, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=31 secs Apr 30 03:30:12.629773 kubelet[3315]: I0430 03:30:12.629733 3315 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a75c049-74d5-4f65-bcf2-58f5a64e3866" path="/var/lib/kubelet/pods/9a75c049-74d5-4f65-bcf2-58f5a64e3866/volumes" Apr 30 03:30:13.579830 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:30:13.579837 systemd-resolved[1907]: Flushed all caches. Apr 30 03:30:13.581618 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:30:14.924000 systemd[1]: Started sshd@20-172.31.23.191:22-147.75.109.163:39214.service - OpenSSH per-connection server daemon (147.75.109.163:39214). Apr 30 03:30:15.204466 sshd[7470]: Accepted publickey for core from 147.75.109.163 port 39214 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:15.209346 sshd[7470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:15.217082 systemd-logind[1998]: New session 21 of user core. Apr 30 03:30:15.223959 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:30:16.037857 sshd[7470]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:16.041098 systemd[1]: sshd@20-172.31.23.191:22-147.75.109.163:39214.service: Deactivated successfully. Apr 30 03:30:16.046951 systemd-logind[1998]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:30:16.047113 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:30:16.051985 systemd-logind[1998]: Removed session 21. Apr 30 03:30:21.089558 systemd[1]: Started sshd@21-172.31.23.191:22-147.75.109.163:52934.service - OpenSSH per-connection server daemon (147.75.109.163:52934). Apr 30 03:30:21.390744 sshd[7598]: Accepted publickey for core from 147.75.109.163 port 52934 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:21.394412 sshd[7598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:21.403104 systemd-logind[1998]: New session 22 of user core. Apr 30 03:30:21.410971 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:30:21.936215 sshd[7598]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:21.941403 systemd[1]: sshd@21-172.31.23.191:22-147.75.109.163:52934.service: Deactivated successfully. Apr 30 03:30:21.941932 systemd-logind[1998]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:30:21.948106 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:30:21.950259 systemd-logind[1998]: Removed session 22. 
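[Annotation] The ntpd "Deleting interface #14 cali12100dc134f" entry above, following systemd-networkd's "Lost carrier", shows how daemons track the short-lived caliXXXX veths: they subscribe to rtnetlink link updates and drop per-interface state when a device disappears. A minimal subscription loop using the vishvananda/netlink package; treating RTM_DELLINK as "interface gone" is the pattern the log reflects, not ntpd's actual code.

package main

import (
	"log"

	"github.com/vishvananda/netlink"
	"golang.org/x/sys/unix"
)

func main() {
	updates := make(chan netlink.LinkUpdate)
	done := make(chan struct{})
	if err := netlink.LinkSubscribe(updates, done); err != nil {
		log.Fatal(err)
	}
	defer close(done)

	for u := range updates {
		name := u.Link.Attrs().Name
		switch u.Header.Type {
		case unix.RTM_DELLINK:
			// e.g. cali12100dc134f disappearing after pod teardown
			log.Printf("deleting state for interface %s", name)
		case unix.RTM_NEWLINK:
			log.Printf("interface %s added or changed", name)
		}
	}
}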
Apr 30 03:30:22.801972 containerd[2020]: time="2025-04-30T03:30:22.801931273Z" level=info msg="StopContainer for \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\" with timeout 5 (s)" Apr 30 03:30:22.802967 containerd[2020]: time="2025-04-30T03:30:22.802927784Z" level=info msg="Stop container \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\" with signal terminated" Apr 30 03:30:22.880825 containerd[2020]: time="2025-04-30T03:30:22.880733289Z" level=info msg="shim disconnected" id=cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234 namespace=k8s.io Apr 30 03:30:22.880825 containerd[2020]: time="2025-04-30T03:30:22.880825436Z" level=warning msg="cleaning up after shim disconnected" id=cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234 namespace=k8s.io Apr 30 03:30:22.881702 containerd[2020]: time="2025-04-30T03:30:22.880838400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:22.885710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234-rootfs.mount: Deactivated successfully. Apr 30 03:30:22.955111 containerd[2020]: time="2025-04-30T03:30:22.955065544Z" level=info msg="StopContainer for \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\" returns successfully" Apr 30 03:30:22.955949 containerd[2020]: time="2025-04-30T03:30:22.955811443Z" level=info msg="StopPodSandbox for \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\"" Apr 30 03:30:22.955949 containerd[2020]: time="2025-04-30T03:30:22.955887474Z" level=info msg="Container to stop \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:22.956119 containerd[2020]: time="2025-04-30T03:30:22.955909252Z" level=info msg="Container to stop \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:22.956119 containerd[2020]: time="2025-04-30T03:30:22.955966938Z" level=info msg="Container to stop \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:22.968261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1-shm.mount: Deactivated successfully. Apr 30 03:30:23.017639 containerd[2020]: time="2025-04-30T03:30:23.012859331Z" level=info msg="shim disconnected" id=133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1 namespace=k8s.io Apr 30 03:30:23.017639 containerd[2020]: time="2025-04-30T03:30:23.012933951Z" level=warning msg="cleaning up after shim disconnected" id=133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1 namespace=k8s.io Apr 30 03:30:23.017639 containerd[2020]: time="2025-04-30T03:30:23.012947144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:23.015497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1-rootfs.mount: Deactivated successfully. 
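[Annotation] "StopContainer ... with timeout 5 (s)" followed by "Stop container ... with signal terminated" above is the graceful-stop sequence: send SIGTERM, wait up to the timeout, then escalate to SIGKILL before deleting the task (which reaps the shim whose "shim disconnected" messages follow). A condensed version with the containerd Go client; the container ID and k8s.io namespace are taken from the log, the timeout handling is simplified.

package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	c, err := client.LoadContainer(ctx, "cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Graceful stop: SIGTERM first, SIGKILL after the 5 s timeout.
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case status := <-exitCh:
		log.Printf("container exited with status %d", status.ExitCode())
	case <-time.After(5 * time.Second):
		log.Println("timeout, escalating to SIGKILL")
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}

	if _, err := task.Delete(ctx); err != nil { // reaps the shim
		log.Fatal(err)
	}
}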
Apr 30 03:30:23.053021 containerd[2020]: time="2025-04-30T03:30:23.052837941Z" level=info msg="TearDown network for sandbox \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\" successfully" Apr 30 03:30:23.053021 containerd[2020]: time="2025-04-30T03:30:23.052871082Z" level=info msg="StopPodSandbox for \"133d75c115f02b8c0cd78e8b89860f7642ef433e9597c210ddf1220d811152f1\" returns successfully" Apr 30 03:30:23.131555 kubelet[3315]: I0430 03:30:23.125895 3315 topology_manager.go:215] "Topology Admit Handler" podUID="2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e" podNamespace="calico-system" podName="calico-node-cf98t" Apr 30 03:30:23.148779 kubelet[3315]: E0430 03:30:23.148035 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f950506-6b51-4472-a7c6-05d30c4d7f9f" containerName="install-cni" Apr 30 03:30:23.148779 kubelet[3315]: E0430 03:30:23.148589 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a75c049-74d5-4f65-bcf2-58f5a64e3866" containerName="calico-apiserver" Apr 30 03:30:23.148779 kubelet[3315]: E0430 03:30:23.148604 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ade21f3d-5259-47ed-ad9a-431742ebb77b" containerName="calico-typha" Apr 30 03:30:23.148779 kubelet[3315]: E0430 03:30:23.148615 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f950506-6b51-4472-a7c6-05d30c4d7f9f" containerName="flexvol-driver" Apr 30 03:30:23.148779 kubelet[3315]: E0430 03:30:23.148623 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f950506-6b51-4472-a7c6-05d30c4d7f9f" containerName="calico-node" Apr 30 03:30:23.177950 kubelet[3315]: I0430 03:30:23.177750 3315 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f950506-6b51-4472-a7c6-05d30c4d7f9f" containerName="calico-node" Apr 30 03:30:23.177950 kubelet[3315]: I0430 03:30:23.177796 3315 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade21f3d-5259-47ed-ad9a-431742ebb77b" containerName="calico-typha" Apr 30 03:30:23.177950 kubelet[3315]: I0430 03:30:23.177804 3315 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a75c049-74d5-4f65-bcf2-58f5a64e3866" containerName="calico-apiserver" Apr 30 03:30:23.190786 kubelet[3315]: I0430 03:30:23.190485 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-flexvol-driver-host\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.190786 kubelet[3315]: I0430 03:30:23.190529 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9f950506-6b51-4472-a7c6-05d30c4d7f9f-node-certs\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.190786 kubelet[3315]: I0430 03:30:23.190551 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-bin-dir\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.190786 kubelet[3315]: I0430 03:30:23.190586 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb7r6\" (UniqueName: \"kubernetes.io/projected/9f950506-6b51-4472-a7c6-05d30c4d7f9f-kube-api-access-xb7r6\") pod 
\"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.190786 kubelet[3315]: I0430 03:30:23.190604 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f950506-6b51-4472-a7c6-05d30c4d7f9f-tigera-ca-bundle\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.190786 kubelet[3315]: I0430 03:30:23.190624 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-var-run-calico\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.191951 kubelet[3315]: I0430 03:30:23.190638 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-lib-modules\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.191951 kubelet[3315]: I0430 03:30:23.190651 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-net-dir\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.191951 kubelet[3315]: I0430 03:30:23.190664 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-policysync\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.191951 kubelet[3315]: I0430 03:30:23.190676 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-var-lib-calico\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.191951 kubelet[3315]: I0430 03:30:23.190691 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-log-dir\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.191951 kubelet[3315]: I0430 03:30:23.190706 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-xtables-lock\") pod \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\" (UID: \"9f950506-6b51-4472-a7c6-05d30c4d7f9f\") " Apr 30 03:30:23.204050 kubelet[3315]: I0430 03:30:23.203629 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:30:23.206724 kubelet[3315]: I0430 03:30:23.205967 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:30:23.214713 kubelet[3315]: I0430 03:30:23.214499 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:30:23.214713 kubelet[3315]: I0430 03:30:23.214534 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:30:23.219589 systemd[1]: var-lib-kubelet-pods-9f950506\x2d6b51\x2d4472\x2da7c6\x2d05d30c4d7f9f-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Apr 30 03:30:23.219983 kubelet[3315]: I0430 03:30:23.219846 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f950506-6b51-4472-a7c6-05d30c4d7f9f-kube-api-access-xb7r6" (OuterVolumeSpecName: "kube-api-access-xb7r6") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "kube-api-access-xb7r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:30:23.220078 kubelet[3315]: I0430 03:30:23.220039 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:30:23.220113 kubelet[3315]: I0430 03:30:23.220084 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-policysync" (OuterVolumeSpecName: "policysync") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:30:23.220113 kubelet[3315]: I0430 03:30:23.220105 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:30:23.220177 kubelet[3315]: I0430 03:30:23.220120 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:30:23.222715 kubelet[3315]: I0430 03:30:23.221724 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f950506-6b51-4472-a7c6-05d30c4d7f9f-node-certs" (OuterVolumeSpecName: "node-certs") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:30:23.265366 kubelet[3315]: I0430 03:30:23.265306 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:30:23.286900 kubelet[3315]: I0430 03:30:23.286853 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f950506-6b51-4472-a7c6-05d30c4d7f9f-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9f950506-6b51-4472-a7c6-05d30c4d7f9f" (UID: "9f950506-6b51-4472-a7c6-05d30c4d7f9f"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:30:23.303129 kubelet[3315]: I0430 03:30:23.302425 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-flexvol-driver-host\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303129 kubelet[3315]: I0430 03:30:23.302484 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-var-lib-calico\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303129 kubelet[3315]: I0430 03:30:23.302513 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-cni-log-dir\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303129 kubelet[3315]: I0430 03:30:23.302544 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-tigera-ca-bundle\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303129 kubelet[3315]: I0430 03:30:23.302601 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-cni-bin-dir\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303440 kubelet[3315]: I0430 03:30:23.302624 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-cni-net-dir\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303440 kubelet[3315]: I0430 03:30:23.302650 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl7bb\" (UniqueName: \"kubernetes.io/projected/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-kube-api-access-xl7bb\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303440 kubelet[3315]: I0430 03:30:23.302680 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-lib-modules\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303440 kubelet[3315]: I0430 03:30:23.302703 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-xtables-lock\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303440 kubelet[3315]: I0430 03:30:23.302728 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-policysync\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303672 kubelet[3315]: I0430 03:30:23.302756 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-node-certs\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303672 kubelet[3315]: I0430 03:30:23.302781 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e-var-run-calico\") pod \"calico-node-cf98t\" (UID: \"2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e\") " pod="calico-system/calico-node-cf98t" Apr 30 03:30:23.303672 kubelet[3315]: I0430 03:30:23.302818 3315 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-xtables-lock\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303672 kubelet[3315]: I0430 03:30:23.302832 3315 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-bin-dir\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303672 kubelet[3315]: I0430 03:30:23.302844 3315 reconciler_common.go:289] "Volume detached for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-flexvol-driver-host\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303672 kubelet[3315]: I0430 03:30:23.302858 3315 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9f950506-6b51-4472-a7c6-05d30c4d7f9f-node-certs\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303672 kubelet[3315]: I0430 03:30:23.302870 3315 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xb7r6\" (UniqueName: \"kubernetes.io/projected/9f950506-6b51-4472-a7c6-05d30c4d7f9f-kube-api-access-xb7r6\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303979 kubelet[3315]: I0430 03:30:23.302882 3315 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f950506-6b51-4472-a7c6-05d30c4d7f9f-tigera-ca-bundle\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303979 kubelet[3315]: I0430 03:30:23.302899 3315 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-var-run-calico\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303979 kubelet[3315]: I0430 03:30:23.302911 3315 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-lib-modules\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303979 kubelet[3315]: I0430 03:30:23.302924 3315 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-var-lib-calico\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303979 kubelet[3315]: I0430 03:30:23.302937 3315 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-log-dir\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303979 kubelet[3315]: I0430 03:30:23.302950 3315 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-cni-net-dir\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.303979 kubelet[3315]: I0430 03:30:23.302961 3315 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9f950506-6b51-4472-a7c6-05d30c4d7f9f-policysync\") on node \"ip-172-31-23-191\" DevicePath \"\"" Apr 30 03:30:23.516347 containerd[2020]: time="2025-04-30T03:30:23.516307867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cf98t,Uid:2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e,Namespace:calico-system,Attempt:0,}" Apr 30 03:30:23.565598 systemd-resolved[1907]: Under memory pressure, flushing caches. Apr 30 03:30:23.565640 systemd-resolved[1907]: Flushed all caches. Apr 30 03:30:23.566813 systemd-journald[1497]: Under memory pressure, flushing caches. Apr 30 03:30:23.592254 containerd[2020]: time="2025-04-30T03:30:23.592115343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:23.592643 containerd[2020]: time="2025-04-30T03:30:23.592596388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:30:23.592841 containerd[2020]: time="2025-04-30T03:30:23.592746139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:30:23.594092 containerd[2020]: time="2025-04-30T03:30:23.594003073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:30:23.669618 containerd[2020]: time="2025-04-30T03:30:23.669308911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cf98t,Uid:2c3e6b48-43c3-4b63-9a9a-6e3ba9e3033e,Namespace:calico-system,Attempt:0,} returns sandbox id \"94b7cb2fcd5728f14f935f02c4cdcead123f82c7715896b62ba61318abd31f6c\""
Apr 30 03:30:23.695326 systemd[1]: var-lib-kubelet-pods-9f950506\x2d6b51\x2d4472\x2da7c6\x2d05d30c4d7f9f-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
Apr 30 03:30:23.696670 systemd[1]: var-lib-kubelet-pods-9f950506\x2d6b51\x2d4472\x2da7c6\x2d05d30c4d7f9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxb7r6.mount: Deactivated successfully.
Apr 30 03:30:23.711708 containerd[2020]: time="2025-04-30T03:30:23.710971012Z" level=info msg="CreateContainer within sandbox \"94b7cb2fcd5728f14f935f02c4cdcead123f82c7715896b62ba61318abd31f6c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 30 03:30:23.768735 containerd[2020]: time="2025-04-30T03:30:23.768661388Z" level=info msg="CreateContainer within sandbox \"94b7cb2fcd5728f14f935f02c4cdcead123f82c7715896b62ba61318abd31f6c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5004c3eb4b7b99ecfefb40c70ee8928839b5cba53a279a19eb4ac1b1d1f3a0e2\""
Apr 30 03:30:23.769825 containerd[2020]: time="2025-04-30T03:30:23.769475445Z" level=info msg="StartContainer for \"5004c3eb4b7b99ecfefb40c70ee8928839b5cba53a279a19eb4ac1b1d1f3a0e2\""
Apr 30 03:30:23.910054 kubelet[3315]: I0430 03:30:23.909697 3315 scope.go:117] "RemoveContainer" containerID="cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234"
Apr 30 03:30:23.930940 containerd[2020]: time="2025-04-30T03:30:23.922447400Z" level=info msg="RemoveContainer for \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\""
Apr 30 03:30:23.938791 containerd[2020]: time="2025-04-30T03:30:23.935098465Z" level=info msg="RemoveContainer for \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\" returns successfully"
Apr 30 03:30:23.940583 kubelet[3315]: I0430 03:30:23.939156 3315 scope.go:117] "RemoveContainer" containerID="c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc"
Apr 30 03:30:23.960466 containerd[2020]: time="2025-04-30T03:30:23.959187583Z" level=info msg="RemoveContainer for \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\""
Apr 30 03:30:23.971539 containerd[2020]: time="2025-04-30T03:30:23.971334631Z" level=info msg="RemoveContainer for \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\" returns successfully"
Apr 30 03:30:23.972455 kubelet[3315]: I0430 03:30:23.971919 3315 scope.go:117] "RemoveContainer" containerID="aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b"
Apr 30 03:30:23.976838 containerd[2020]: time="2025-04-30T03:30:23.976654238Z" level=info msg="RemoveContainer for \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\""
Apr 30 03:30:24.019002 containerd[2020]: time="2025-04-30T03:30:24.018946091Z" level=info msg="StartContainer for \"5004c3eb4b7b99ecfefb40c70ee8928839b5cba53a279a19eb4ac1b1d1f3a0e2\" returns successfully"
Apr 30 03:30:24.047356 containerd[2020]: time="2025-04-30T03:30:24.047309055Z" level=info msg="RemoveContainer for \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\" returns successfully"
Apr 30 03:30:24.049689 kubelet[3315]: I0430 03:30:24.049660 3315 scope.go:117] "RemoveContainer" containerID="cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234"
Apr 30 03:30:24.052299 containerd[2020]: time="2025-04-30T03:30:24.052249377Z" level=error msg="ContainerStatus for \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\": not found"
Apr 30 03:30:24.054385 kubelet[3315]: E0430 03:30:24.053977 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\": not found" containerID="cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234"
Apr 30 03:30:24.054385 kubelet[3315]: I0430 03:30:24.054034 3315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234"} err="failed to get container status \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc1704ed32693bebb4a9793172d3d0f790694d95cccca5919601a62eae50d234\": not found"
Apr 30 03:30:24.054385 kubelet[3315]: I0430 03:30:24.054065 3315 scope.go:117] "RemoveContainer" containerID="c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc"
Apr 30 03:30:24.054878 containerd[2020]: time="2025-04-30T03:30:24.054752461Z" level=error msg="ContainerStatus for \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\": not found"
Apr 30 03:30:24.055716 kubelet[3315]: E0430 03:30:24.055684 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\": not found" containerID="c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc"
Apr 30 03:30:24.055816 kubelet[3315]: I0430 03:30:24.055721 3315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc"} err="failed to get container status \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6d4fc7ef4d161c79ce5acf8f548a17da6d0d5ed4089df2b706041cee13d3cdc\": not found"
Apr 30 03:30:24.055816 kubelet[3315]: I0430 03:30:24.055747 3315 scope.go:117] "RemoveContainer" containerID="aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b"
Apr 30 03:30:24.056436 containerd[2020]: time="2025-04-30T03:30:24.056399710Z" level=error msg="ContainerStatus for \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\": not found"
Apr 30 03:30:24.057316 kubelet[3315]: E0430 03:30:24.057284 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\": not found" containerID="aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b"
Apr 30 03:30:24.057743 kubelet[3315]: I0430 03:30:24.057703 3315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b"} err="failed to get container status \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa0c54da0baec9701f0495c51d884a73dd7236665b273c2aa249b4be9bff613b\": not found"
Apr 30 03:30:24.200316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5004c3eb4b7b99ecfefb40c70ee8928839b5cba53a279a19eb4ac1b1d1f3a0e2-rootfs.mount: Deactivated successfully.
Apr 30 03:30:24.225839 containerd[2020]: time="2025-04-30T03:30:24.225535010Z" level=info msg="shim disconnected" id=5004c3eb4b7b99ecfefb40c70ee8928839b5cba53a279a19eb4ac1b1d1f3a0e2 namespace=k8s.io
Apr 30 03:30:24.225839 containerd[2020]: time="2025-04-30T03:30:24.225616251Z" level=warning msg="cleaning up after shim disconnected" id=5004c3eb4b7b99ecfefb40c70ee8928839b5cba53a279a19eb4ac1b1d1f3a0e2 namespace=k8s.io
Apr 30 03:30:24.225839 containerd[2020]: time="2025-04-30T03:30:24.225628333Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:24.248750 containerd[2020]: time="2025-04-30T03:30:24.248688567Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:30:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 03:30:24.630289 kubelet[3315]: I0430 03:30:24.629928 3315 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f950506-6b51-4472-a7c6-05d30c4d7f9f" path="/var/lib/kubelet/pods/9f950506-6b51-4472-a7c6-05d30c4d7f9f/volumes"
Apr 30 03:30:24.922040 containerd[2020]: time="2025-04-30T03:30:24.921815514Z" level=info msg="CreateContainer within sandbox \"94b7cb2fcd5728f14f935f02c4cdcead123f82c7715896b62ba61318abd31f6c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 30 03:30:24.952300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2208612624.mount: Deactivated successfully.
Apr 30 03:30:24.956332 containerd[2020]: time="2025-04-30T03:30:24.956289393Z" level=info msg="CreateContainer within sandbox \"94b7cb2fcd5728f14f935f02c4cdcead123f82c7715896b62ba61318abd31f6c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"72ffb343ccfb648bd27b7b2d18f6529f17446c261d7ac42d135229c17a7d7249\""
Apr 30 03:30:24.958442 containerd[2020]: time="2025-04-30T03:30:24.958402861Z" level=info msg="StartContainer for \"72ffb343ccfb648bd27b7b2d18f6529f17446c261d7ac42d135229c17a7d7249\""
Apr 30 03:30:25.051372 containerd[2020]: time="2025-04-30T03:30:25.051328256Z" level=info msg="StartContainer for \"72ffb343ccfb648bd27b7b2d18f6529f17446c261d7ac42d135229c17a7d7249\" returns successfully"
Apr 30 03:30:26.983155 systemd[1]: Started sshd@22-172.31.23.191:22-147.75.109.163:39830.service - OpenSSH per-connection server daemon (147.75.109.163:39830).
Apr 30 03:30:27.320451 sshd[7897]: Accepted publickey for core from 147.75.109.163 port 39830 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:30:27.324869 sshd[7897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:27.338515 systemd-logind[1998]: New session 23 of user core.
Apr 30 03:30:27.343551 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:30:27.537962 systemd-journald[1497]: Under memory pressure, flushing caches.
Apr 30 03:30:27.532126 systemd-resolved[1907]: Under memory pressure, flushing caches.
Apr 30 03:30:27.532135 systemd-resolved[1907]: Flushed all caches.
Apr 30 03:30:27.910042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72ffb343ccfb648bd27b7b2d18f6529f17446c261d7ac42d135229c17a7d7249-rootfs.mount: Deactivated successfully.
Apr 30 03:30:27.942062 containerd[2020]: time="2025-04-30T03:30:27.918890786Z" level=info msg="shim disconnected" id=72ffb343ccfb648bd27b7b2d18f6529f17446c261d7ac42d135229c17a7d7249 namespace=k8s.io
Apr 30 03:30:27.942812 containerd[2020]: time="2025-04-30T03:30:27.942765293Z" level=warning msg="cleaning up after shim disconnected" id=72ffb343ccfb648bd27b7b2d18f6529f17446c261d7ac42d135229c17a7d7249 namespace=k8s.io
Apr 30 03:30:27.943615 containerd[2020]: time="2025-04-30T03:30:27.943450820Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:28.758404 sshd[7897]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:28.762607 systemd[1]: sshd@22-172.31.23.191:22-147.75.109.163:39830.service: Deactivated successfully.
Apr 30 03:30:28.766868 systemd-logind[1998]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:30:28.768494 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:30:28.769647 systemd-logind[1998]: Removed session 23.
Apr 30 03:30:29.143985 containerd[2020]: time="2025-04-30T03:30:29.143761937Z" level=info msg="CreateContainer within sandbox \"94b7cb2fcd5728f14f935f02c4cdcead123f82c7715896b62ba61318abd31f6c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 30 03:30:29.225229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370107270.mount: Deactivated successfully.
Apr 30 03:30:29.229976 containerd[2020]: time="2025-04-30T03:30:29.229931696Z" level=info msg="CreateContainer within sandbox \"94b7cb2fcd5728f14f935f02c4cdcead123f82c7715896b62ba61318abd31f6c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d8ab766b6b29c5b29e3876bf9a8f78a9776404963aa65205ed7419ccba2d3d22\""
Apr 30 03:30:29.231416 containerd[2020]: time="2025-04-30T03:30:29.231026034Z" level=info msg="StartContainer for \"d8ab766b6b29c5b29e3876bf9a8f78a9776404963aa65205ed7419ccba2d3d22\""
Apr 30 03:30:29.321446 containerd[2020]: time="2025-04-30T03:30:29.321321670Z" level=info msg="StartContainer for \"d8ab766b6b29c5b29e3876bf9a8f78a9776404963aa65205ed7419ccba2d3d22\" returns successfully"
Apr 30 03:30:29.582603 systemd-journald[1497]: Under memory pressure, flushing caches.
Apr 30 03:30:29.581680 systemd-resolved[1907]: Under memory pressure, flushing caches.
Apr 30 03:30:29.581709 systemd-resolved[1907]: Flushed all caches.
Apr 30 03:30:30.170796 kubelet[3315]: I0430 03:30:30.168804 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cf98t" podStartSLOduration=7.11700145 podStartE2EDuration="7.11700145s" podCreationTimestamp="2025-04-30 03:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:30.116447344 +0000 UTC m=+103.642664123" watchObservedRunningTime="2025-04-30 03:30:30.11700145 +0000 UTC m=+103.643218209"
Apr 30 03:30:31.161023 systemd[1]: run-containerd-runc-k8s.io-d8ab766b6b29c5b29e3876bf9a8f78a9776404963aa65205ed7419ccba2d3d22-runc.X0Oy8z.mount: Deactivated successfully.
Apr 30 03:30:31.628945 systemd-resolved[1907]: Under memory pressure, flushing caches.
Apr 30 03:30:31.628953 systemd-resolved[1907]: Flushed all caches.
Apr 30 03:30:31.630734 systemd-journald[1497]: Under memory pressure, flushing caches.
Apr 30 03:30:32.021679 (udev-worker)[8190]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:30:32.029041 (udev-worker)[8192]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:30:33.805843 systemd[1]: Started sshd@23-172.31.23.191:22-147.75.109.163:39846.service - OpenSSH per-connection server daemon (147.75.109.163:39846).
Apr 30 03:30:34.097333 sshd[8232]: Accepted publickey for core from 147.75.109.163 port 39846 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:30:34.100367 sshd[8232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:34.106800 systemd-logind[1998]: New session 24 of user core.
Apr 30 03:30:34.110953 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 03:30:34.967411 sshd[8232]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:34.970361 systemd[1]: sshd@23-172.31.23.191:22-147.75.109.163:39846.service: Deactivated successfully.
Apr 30 03:30:34.978030 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 03:30:34.978074 systemd-logind[1998]: Session 24 logged out. Waiting for processes to exit.
Apr 30 03:30:34.980428 systemd-logind[1998]: Removed session 24.
Apr 30 03:30:48.429815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-680d7b9163ff3f4c5019192dca5d9f0dc74c603cbbf31e5fd115dfcf6e48ba5c-rootfs.mount: Deactivated successfully.
Apr 30 03:30:48.434855 containerd[2020]: time="2025-04-30T03:30:48.419153968Z" level=info msg="shim disconnected" id=680d7b9163ff3f4c5019192dca5d9f0dc74c603cbbf31e5fd115dfcf6e48ba5c namespace=k8s.io
Apr 30 03:30:48.435915 containerd[2020]: time="2025-04-30T03:30:48.434865339Z" level=warning msg="cleaning up after shim disconnected" id=680d7b9163ff3f4c5019192dca5d9f0dc74c603cbbf31e5fd115dfcf6e48ba5c namespace=k8s.io
Apr 30 03:30:48.435915 containerd[2020]: time="2025-04-30T03:30:48.434887332Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:48.741794 kubelet[3315]: I0430 03:30:48.741669 3315 scope.go:117] "RemoveContainer" containerID="1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855"
Apr 30 03:30:48.756954 containerd[2020]: time="2025-04-30T03:30:48.756899845Z" level=info msg="RemoveContainer for \"1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855\""
Apr 30 03:30:48.769328 containerd[2020]: time="2025-04-30T03:30:48.769262664Z" level=info msg="RemoveContainer for \"1a6dd8040c172be2f5b61d3ab4eb784c4ffc016ed9d92b0d13a4d9cd0175b855\" returns successfully"
Apr 30 03:30:48.770997 containerd[2020]: time="2025-04-30T03:30:48.770942212Z" level=info msg="StopPodSandbox for \"bdc26b9286d4335c13a5ffd9eba14573f4c3b8285f62fe28f181f2f63cbef0c1\""
Apr 30 03:30:49.181632 containerd[2020]: time="2025-04-30T03:30:49.181110857Z" level=info msg="shim disconnected" id=06c2a9bfcf6a701a114a8effa3fcafeb43db0da307d1274fea643bd895c4cfb8 namespace=k8s.io
Apr 30 03:30:49.181632 containerd[2020]: time="2025-04-30T03:30:49.181169378Z" level=warning msg="cleaning up after shim disconnected" id=06c2a9bfcf6a701a114a8effa3fcafeb43db0da307d1274fea643bd895c4cfb8 namespace=k8s.io
Apr 30 03:30:49.181632 containerd[2020]: time="2025-04-30T03:30:49.181181974Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:49.181408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06c2a9bfcf6a701a114a8effa3fcafeb43db0da307d1274fea643bd895c4cfb8-rootfs.mount: Deactivated successfully.
Apr 30 03:30:49.184042 kubelet[3315]: I0430 03:30:49.183802 3315 scope.go:117] "RemoveContainer" containerID="680d7b9163ff3f4c5019192dca5d9f0dc74c603cbbf31e5fd115dfcf6e48ba5c"
Apr 30 03:30:49.224138 containerd[2020]: time="2025-04-30T03:30:49.224065440Z" level=info msg="CreateContainer within sandbox \"ee8c4b4cf190ee3381a983a022efbfa80823f7d9b0dcdce27d9d561a35777cf9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 30 03:30:49.318717 containerd[2020]: time="2025-04-30T03:30:49.318658009Z" level=info msg="CreateContainer within sandbox \"ee8c4b4cf190ee3381a983a022efbfa80823f7d9b0dcdce27d9d561a35777cf9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"97fcd5223fd5c324b7985483bc3c8a3c2b704dc4e16a9d931067a6a031f47815\""
Apr 30 03:30:49.319256 containerd[2020]: time="2025-04-30T03:30:49.319193820Z" level=info msg="StartContainer for \"97fcd5223fd5c324b7985483bc3c8a3c2b704dc4e16a9d931067a6a031f47815\""
Apr 30 03:30:49.417924 containerd[2020]: time="2025-04-30T03:30:49.416964134Z" level=info msg="StartContainer for \"97fcd5223fd5c324b7985483bc3c8a3c2b704dc4e16a9d931067a6a031f47815\" returns successfully"
Apr 30 03:30:49.538953 kubelet[3315]: E0430 03:30:49.527270 3315 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-191?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 30 03:30:49.548174 systemd-resolved[1907]: Under memory pressure, flushing caches.
Apr 30 03:30:49.549640 systemd-journald[1497]: Under memory pressure, flushing caches.
Apr 30 03:30:49.548210 systemd-resolved[1907]: Flushed all caches.
Apr 30 03:30:50.216864 kubelet[3315]: I0430 03:30:50.216838 3315 scope.go:117] "RemoveContainer" containerID="06c2a9bfcf6a701a114a8effa3fcafeb43db0da307d1274fea643bd895c4cfb8"
Apr 30 03:30:50.254792 containerd[2020]: time="2025-04-30T03:30:50.254524191Z" level=info msg="CreateContainer within sandbox \"11f3669d236c693b50e36f93fb23ec8faa6f5de13579f9a5dd2cb590020c88cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 03:30:50.311380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount816212720.mount: Deactivated successfully.
Apr 30 03:30:50.315971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818759744.mount: Deactivated successfully.
Apr 30 03:30:50.328659 containerd[2020]: time="2025-04-30T03:30:50.328529594Z" level=info msg="CreateContainer within sandbox \"11f3669d236c693b50e36f93fb23ec8faa6f5de13579f9a5dd2cb590020c88cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"22cc562528f839dc394d9cd1f0cf85604e2693d030caf4b542a002df99451dc6\""
Apr 30 03:30:50.330182 containerd[2020]: time="2025-04-30T03:30:50.329440782Z" level=info msg="StartContainer for \"22cc562528f839dc394d9cd1f0cf85604e2693d030caf4b542a002df99451dc6\""
Apr 30 03:30:50.447017 containerd[2020]: time="2025-04-30T03:30:50.446978451Z" level=info msg="StartContainer for \"22cc562528f839dc394d9cd1f0cf85604e2693d030caf4b542a002df99451dc6\" returns successfully"
Apr 30 03:30:53.711265 systemd[1]: run-containerd-runc-k8s.io-e817caf45b3a0fc1414a9c2b8eae38784b4160104689f3cd6820be6671be601b-runc.ZGGHLQ.mount: Deactivated successfully.
Apr 30 03:30:54.468086 containerd[2020]: time="2025-04-30T03:30:54.468013441Z" level=info msg="shim disconnected" id=d687f5287dd2ead22a5c370cf2f61327d2568f4e1c6ebb33e8a056072437943d namespace=k8s.io
Apr 30 03:30:54.468086 containerd[2020]: time="2025-04-30T03:30:54.468070484Z" level=warning msg="cleaning up after shim disconnected" id=d687f5287dd2ead22a5c370cf2f61327d2568f4e1c6ebb33e8a056072437943d namespace=k8s.io
Apr 30 03:30:54.468814 containerd[2020]: time="2025-04-30T03:30:54.468392161Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:54.540543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d687f5287dd2ead22a5c370cf2f61327d2568f4e1c6ebb33e8a056072437943d-rootfs.mount: Deactivated successfully.
Apr 30 03:30:55.232011 kubelet[3315]: I0430 03:30:55.231978 3315 scope.go:117] "RemoveContainer" containerID="d687f5287dd2ead22a5c370cf2f61327d2568f4e1c6ebb33e8a056072437943d"
Apr 30 03:30:55.234624 containerd[2020]: time="2025-04-30T03:30:55.234586433Z" level=info msg="CreateContainer within sandbox \"7048168b3aeb876df13e2f442c78ccba6bf5066ff33eea84b0436e2819adecb3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 03:30:55.259119 containerd[2020]: time="2025-04-30T03:30:55.259069392Z" level=info msg="CreateContainer within sandbox \"7048168b3aeb876df13e2f442c78ccba6bf5066ff33eea84b0436e2819adecb3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"de8b65ab53ef7dbf8d6d1ae5b45cdcd7e611b54719c55dd9da3fb4b940bddaba\""
Apr 30 03:30:55.259645 containerd[2020]: time="2025-04-30T03:30:55.259621849Z" level=info msg="StartContainer for \"de8b65ab53ef7dbf8d6d1ae5b45cdcd7e611b54719c55dd9da3fb4b940bddaba\""
Apr 30 03:30:55.337203 containerd[2020]: time="2025-04-30T03:30:55.337155232Z" level=info msg="StartContainer for \"de8b65ab53ef7dbf8d6d1ae5b45cdcd7e611b54719c55dd9da3fb4b940bddaba\" returns successfully"
Apr 30 03:30:59.551810 kubelet[3315]: E0430 03:30:59.551760 3315 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-23-191)"