Jun 20 18:55:34.956851 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025 Jun 20 18:55:34.956892 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 18:55:34.956911 kernel: BIOS-provided physical RAM map: Jun 20 18:55:34.956922 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 20 18:55:34.956933 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jun 20 18:55:34.956943 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jun 20 18:55:34.956957 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jun 20 18:55:34.956968 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jun 20 18:55:34.956979 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jun 20 18:55:34.956990 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jun 20 18:55:34.957006 kernel: NX (Execute Disable) protection: active Jun 20 18:55:34.957017 kernel: APIC: Static calls initialized Jun 20 18:55:34.957029 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jun 20 18:55:34.957042 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jun 20 18:55:34.957057 kernel: extended physical RAM map: Jun 20 18:55:34.957124 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 20 18:55:34.957143 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jun 20 18:55:34.957158 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jun 20 18:55:34.957172 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jun 20 18:55:34.957186 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jun 20 18:55:34.957200 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jun 20 18:55:34.957214 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jun 20 18:55:34.957228 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jun 20 18:55:34.957242 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jun 20 18:55:34.957256 kernel: efi: EFI v2.7 by EDK II Jun 20 18:55:34.957270 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Jun 20 18:55:34.957287 kernel: secureboot: Secure boot disabled Jun 20 18:55:34.957301 kernel: SMBIOS 2.7 present. 
Jun 20 18:55:34.957314 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jun 20 18:55:34.957328 kernel: Hypervisor detected: KVM Jun 20 18:55:34.957342 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 20 18:55:34.957356 kernel: kvm-clock: using sched offset of 4055622728 cycles Jun 20 18:55:34.957370 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 18:55:34.957385 kernel: tsc: Detected 2499.998 MHz processor Jun 20 18:55:34.957399 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 18:55:34.957413 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 18:55:34.957428 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jun 20 18:55:34.957445 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 20 18:55:34.957460 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 18:55:34.957475 kernel: Using GB pages for direct mapping Jun 20 18:55:34.957495 kernel: ACPI: Early table checksum verification disabled Jun 20 18:55:34.957511 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jun 20 18:55:34.957526 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jun 20 18:55:34.957544 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 20 18:55:34.957559 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jun 20 18:55:34.957575 kernel: ACPI: FACS 0x00000000789D0000 000040 Jun 20 18:55:34.957590 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jun 20 18:55:34.957605 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 20 18:55:34.957620 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 20 18:55:34.957635 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jun 20 18:55:34.957651 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jun 20 18:55:34.957669 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 20 18:55:34.957685 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 20 18:55:34.957700 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jun 20 18:55:34.957716 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jun 20 18:55:34.957731 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jun 20 18:55:34.957746 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jun 20 18:55:34.957762 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jun 20 18:55:34.957777 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jun 20 18:55:34.957792 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jun 20 18:55:34.957810 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jun 20 18:55:34.957825 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jun 20 18:55:34.957840 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jun 20 18:55:34.957855 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jun 20 18:55:34.957871 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jun 20 18:55:34.957886 kernel: 
SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 20 18:55:34.957900 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 20 18:55:34.957916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jun 20 18:55:34.957931 kernel: NUMA: Initialized distance table, cnt=1 Jun 20 18:55:34.957949 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Jun 20 18:55:34.957964 kernel: Zone ranges: Jun 20 18:55:34.957979 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 18:55:34.957995 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jun 20 18:55:34.958009 kernel: Normal empty Jun 20 18:55:34.958024 kernel: Movable zone start for each node Jun 20 18:55:34.958039 kernel: Early memory node ranges Jun 20 18:55:34.958054 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 20 18:55:34.958129 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jun 20 18:55:34.958147 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jun 20 18:55:34.958161 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jun 20 18:55:34.958174 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 18:55:34.958187 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 20 18:55:34.958203 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jun 20 18:55:34.958218 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jun 20 18:55:34.958233 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 20 18:55:34.958248 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 20 18:55:34.958264 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jun 20 18:55:34.958282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 20 18:55:34.958298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 18:55:34.958313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 20 18:55:34.958329 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 20 18:55:34.958344 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 18:55:34.958359 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 20 18:55:34.958374 kernel: TSC deadline timer available Jun 20 18:55:34.958389 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 20 18:55:34.958405 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 20 18:55:34.958423 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jun 20 18:55:34.958438 kernel: Booting paravirtualized kernel on KVM Jun 20 18:55:34.958453 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 18:55:34.958469 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 18:55:34.958485 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jun 20 18:55:34.958500 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jun 20 18:55:34.958515 kernel: pcpu-alloc: [0] 0 1 Jun 20 18:55:34.958530 kernel: kvm-guest: PV spinlocks enabled Jun 20 18:55:34.958546 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 20 18:55:34.958567 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 
nvme_core.io_timeout=4294967295 verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 18:55:34.958583 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 18:55:34.958598 kernel: random: crng init done Jun 20 18:55:34.958613 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 18:55:34.958629 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 18:55:34.958644 kernel: Fallback order for Node 0: 0 Jun 20 18:55:34.958660 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 Jun 20 18:55:34.958675 kernel: Policy zone: DMA32 Jun 20 18:55:34.958693 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 18:55:34.958709 kernel: Memory: 1872536K/2037804K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 165012K reserved, 0K cma-reserved) Jun 20 18:55:34.958724 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 18:55:34.958737 kernel: Kernel/User page tables isolation: enabled Jun 20 18:55:34.958753 kernel: ftrace: allocating 37938 entries in 149 pages Jun 20 18:55:34.958780 kernel: ftrace: allocated 149 pages with 4 groups Jun 20 18:55:34.958800 kernel: Dynamic Preempt: voluntary Jun 20 18:55:34.958816 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 18:55:34.958833 kernel: rcu: RCU event tracing is enabled. Jun 20 18:55:34.958850 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 18:55:34.958866 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 18:55:34.958882 kernel: Rude variant of Tasks RCU enabled. Jun 20 18:55:34.958901 kernel: Tracing variant of Tasks RCU enabled. Jun 20 18:55:34.958918 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 18:55:34.958934 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 18:55:34.958950 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 20 18:55:34.958967 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 18:55:34.958987 kernel: Console: colour dummy device 80x25 Jun 20 18:55:34.959003 kernel: printk: console [tty0] enabled Jun 20 18:55:34.959019 kernel: printk: console [ttyS0] enabled Jun 20 18:55:34.959035 kernel: ACPI: Core revision 20230628 Jun 20 18:55:34.959051 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jun 20 18:55:34.959083 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 18:55:34.959097 kernel: x2apic enabled Jun 20 18:55:34.959112 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 18:55:34.959126 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jun 20 18:55:34.959143 kernel: Calibrating delay loop (skipped) preset value.. 
4999.99 BogoMIPS (lpj=2499998) Jun 20 18:55:34.959158 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 20 18:55:34.959172 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jun 20 18:55:34.959185 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 18:55:34.959199 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 18:55:34.959212 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 18:55:34.959226 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 20 18:55:34.959240 kernel: RETBleed: Vulnerable Jun 20 18:55:34.959254 kernel: Speculative Store Bypass: Vulnerable Jun 20 18:55:34.959268 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 18:55:34.959285 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 18:55:34.959298 kernel: GDS: Unknown: Dependent on hypervisor status Jun 20 18:55:34.959311 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 18:55:34.959326 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 18:55:34.959340 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 18:55:34.959354 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 18:55:34.959368 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jun 20 18:55:34.959381 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jun 20 18:55:34.959396 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 20 18:55:34.959411 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 20 18:55:34.959427 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 20 18:55:34.959446 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jun 20 18:55:34.959462 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 18:55:34.959478 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jun 20 18:55:34.959491 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jun 20 18:55:34.959506 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jun 20 18:55:34.959519 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jun 20 18:55:34.959533 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jun 20 18:55:34.959547 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jun 20 18:55:34.959560 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jun 20 18:55:34.959581 kernel: Freeing SMP alternatives memory: 32K Jun 20 18:55:34.959600 kernel: pid_max: default: 32768 minimum: 301 Jun 20 18:55:34.959624 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jun 20 18:55:34.959642 kernel: landlock: Up and running. Jun 20 18:55:34.959657 kernel: SELinux: Initializing. Jun 20 18:55:34.959671 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 18:55:34.959686 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 18:55:34.959702 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 20 18:55:34.959717 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:55:34.959743 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 18:55:34.959759 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:55:34.959775 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 20 18:55:34.959794 kernel: signal: max sigframe size: 3632 Jun 20 18:55:34.959810 kernel: rcu: Hierarchical SRCU implementation. Jun 20 18:55:34.959826 kernel: rcu: Max phase no-delay instances is 400. Jun 20 18:55:34.959841 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 18:55:34.959857 kernel: smp: Bringing up secondary CPUs ... Jun 20 18:55:34.959873 kernel: smpboot: x86: Booting SMP configuration: Jun 20 18:55:34.959888 kernel: .... node #0, CPUs: #1 Jun 20 18:55:34.959904 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 20 18:55:34.959920 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 20 18:55:34.959939 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 18:55:34.959954 kernel: smpboot: Max logical packages: 1 Jun 20 18:55:34.959970 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jun 20 18:55:34.959986 kernel: devtmpfs: initialized Jun 20 18:55:34.960001 kernel: x86/mm: Memory block size: 128MB Jun 20 18:55:34.960017 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jun 20 18:55:34.960032 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 18:55:34.960048 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 18:55:34.960063 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 18:55:34.962131 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 18:55:34.962150 kernel: audit: initializing netlink subsys (disabled) Jun 20 18:55:34.962167 kernel: audit: type=2000 audit(1750445735.235:1): state=initialized audit_enabled=0 res=1 Jun 20 18:55:34.962183 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 18:55:34.962199 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 18:55:34.962215 kernel: cpuidle: using governor menu Jun 20 18:55:34.962231 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 18:55:34.962246 kernel: dca service started, version 1.12.1 Jun 20 18:55:34.962262 kernel: PCI: Using configuration type 1 for base access Jun 20 18:55:34.962282 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 20 18:55:34.962298 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 18:55:34.962313 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 18:55:34.962329 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 18:55:34.962344 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 18:55:34.962360 kernel: ACPI: Added _OSI(Module Device) Jun 20 18:55:34.962375 kernel: ACPI: Added _OSI(Processor Device) Jun 20 18:55:34.962391 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 18:55:34.962406 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 20 18:55:34.962425 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 20 18:55:34.962440 kernel: ACPI: Interpreter enabled Jun 20 18:55:34.962456 kernel: ACPI: PM: (supports S0 S5) Jun 20 18:55:34.962471 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 18:55:34.962487 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 18:55:34.962503 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 18:55:34.962518 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 20 18:55:34.962534 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 18:55:34.962768 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 20 18:55:34.962922 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 20 18:55:34.963061 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 20 18:55:34.963093 kernel: acpiphp: Slot [3] registered Jun 20 18:55:34.963109 kernel: acpiphp: Slot [4] registered Jun 20 18:55:34.963125 kernel: acpiphp: Slot [5] registered Jun 20 18:55:34.963140 kernel: acpiphp: Slot [6] registered Jun 20 18:55:34.963156 kernel: acpiphp: Slot [7] registered Jun 20 18:55:34.963175 kernel: acpiphp: Slot [8] registered Jun 20 18:55:34.963191 kernel: acpiphp: Slot [9] registered Jun 20 18:55:34.963206 kernel: acpiphp: Slot [10] registered Jun 20 18:55:34.963222 kernel: acpiphp: Slot [11] registered Jun 20 18:55:34.963237 kernel: acpiphp: Slot [12] registered Jun 20 18:55:34.963253 kernel: acpiphp: Slot [13] registered Jun 20 18:55:34.963268 kernel: acpiphp: Slot [14] registered Jun 20 18:55:34.963284 kernel: acpiphp: Slot [15] registered Jun 20 18:55:34.963300 kernel: acpiphp: Slot [16] registered Jun 20 18:55:34.963315 kernel: acpiphp: Slot [17] registered Jun 20 18:55:34.963333 kernel: acpiphp: Slot [18] registered Jun 20 18:55:34.963349 kernel: acpiphp: Slot [19] registered Jun 20 18:55:34.963364 kernel: acpiphp: Slot [20] registered Jun 20 18:55:34.963380 kernel: acpiphp: Slot [21] registered Jun 20 18:55:34.963395 kernel: acpiphp: Slot [22] registered Jun 20 18:55:34.963410 kernel: acpiphp: Slot [23] registered Jun 20 18:55:34.963425 kernel: acpiphp: Slot [24] registered Jun 20 18:55:34.963440 kernel: acpiphp: Slot [25] registered Jun 20 18:55:34.963454 kernel: acpiphp: Slot [26] registered Jun 20 18:55:34.963471 kernel: acpiphp: Slot [27] registered Jun 20 18:55:34.963487 kernel: acpiphp: Slot [28] registered Jun 20 18:55:34.963501 kernel: acpiphp: Slot [29] registered Jun 20 18:55:34.963516 kernel: acpiphp: Slot [30] registered Jun 20 18:55:34.963531 kernel: acpiphp: Slot [31] registered Jun 20 18:55:34.963546 kernel: PCI host bridge to bus 0000:00 Jun 20 18:55:34.963710 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Jun 20 18:55:34.963861 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 20 18:55:34.963986 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 20 18:55:34.966199 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 20 18:55:34.966350 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jun 20 18:55:34.966472 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 18:55:34.966631 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 20 18:55:34.966774 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 20 18:55:34.966934 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jun 20 18:55:34.970960 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 20 18:55:34.971203 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jun 20 18:55:34.971348 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jun 20 18:55:34.971491 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jun 20 18:55:34.971637 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jun 20 18:55:34.971787 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jun 20 18:55:34.971944 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jun 20 18:55:34.972171 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jun 20 18:55:34.972338 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Jun 20 18:55:34.972498 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jun 20 18:55:34.972656 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Jun 20 18:55:34.972816 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 18:55:34.972988 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 20 18:55:34.973202 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Jun 20 18:55:34.973352 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 20 18:55:34.973490 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Jun 20 18:55:34.973510 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 20 18:55:34.973527 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 20 18:55:34.973543 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 18:55:34.973560 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 20 18:55:34.973576 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 20 18:55:34.973598 kernel: iommu: Default domain type: Translated Jun 20 18:55:34.973613 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 18:55:34.973629 kernel: efivars: Registered efivars operations Jun 20 18:55:34.973646 kernel: PCI: Using ACPI for IRQ routing Jun 20 18:55:34.973662 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 18:55:34.973678 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jun 20 18:55:34.973693 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jun 20 18:55:34.973709 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jun 20 18:55:34.973846 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jun 20 18:55:34.973989 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jun 20 18:55:34.974145 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 18:55:34.974166 kernel: vgaarb: loaded Jun 20 18:55:34.974183 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jun 20 18:55:34.974200 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jun 20 18:55:34.974216 kernel: clocksource: Switched to clocksource kvm-clock Jun 20 18:55:34.974232 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 18:55:34.974249 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 18:55:34.974264 kernel: pnp: PnP ACPI init Jun 20 18:55:34.974285 kernel: pnp: PnP ACPI: found 5 devices Jun 20 18:55:34.974301 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 18:55:34.974317 kernel: NET: Registered PF_INET protocol family Jun 20 18:55:34.974333 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 18:55:34.974350 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 20 18:55:34.974366 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 18:55:34.974383 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 18:55:34.974399 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 20 18:55:34.974418 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 20 18:55:34.974434 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 18:55:34.974450 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 18:55:34.974466 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 18:55:34.974482 kernel: NET: Registered PF_XDP protocol family Jun 20 18:55:34.974612 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 20 18:55:34.974735 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 20 18:55:34.974857 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 20 18:55:34.974980 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 20 18:55:34.975125 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jun 20 18:55:34.975269 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 20 18:55:34.975288 kernel: PCI: CLS 0 bytes, default 64 Jun 20 18:55:34.975302 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 20 18:55:34.975315 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jun 20 18:55:34.975329 kernel: clocksource: Switched to clocksource tsc Jun 20 18:55:34.975343 kernel: Initialise system trusted keyrings Jun 20 18:55:34.975360 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 20 18:55:34.975386 kernel: Key type asymmetric registered Jun 20 18:55:34.975402 kernel: Asymmetric key parser 'x509' registered Jun 20 18:55:34.975418 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 20 18:55:34.975434 kernel: io scheduler mq-deadline registered Jun 20 18:55:34.975452 kernel: io scheduler kyber registered Jun 20 18:55:34.975469 kernel: io scheduler bfq registered Jun 20 18:55:34.975486 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 18:55:34.975502 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 18:55:34.975520 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 18:55:34.975542 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 20 18:55:34.975559 kernel: i8042: Warning: Keylock active Jun 20 18:55:34.975576 
kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 18:55:34.975593 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 18:55:34.975826 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 20 18:55:34.976011 kernel: rtc_cmos 00:00: registered as rtc0 Jun 20 18:55:34.978240 kernel: rtc_cmos 00:00: setting system clock to 2025-06-20T18:55:34 UTC (1750445734) Jun 20 18:55:34.978394 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 20 18:55:34.978421 kernel: intel_pstate: CPU model not supported Jun 20 18:55:34.978438 kernel: efifb: probing for efifb Jun 20 18:55:34.978453 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jun 20 18:55:34.978469 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jun 20 18:55:34.978506 kernel: efifb: scrolling: redraw Jun 20 18:55:34.978526 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 20 18:55:34.978542 kernel: Console: switching to colour frame buffer device 100x37 Jun 20 18:55:34.978558 kernel: fb0: EFI VGA frame buffer device Jun 20 18:55:34.978574 kernel: pstore: Using crash dump compression: deflate Jun 20 18:55:34.978594 kernel: pstore: Registered efi_pstore as persistent store backend Jun 20 18:55:34.978613 kernel: NET: Registered PF_INET6 protocol family Jun 20 18:55:34.978628 kernel: Segment Routing with IPv6 Jun 20 18:55:34.978645 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 18:55:34.978660 kernel: NET: Registered PF_PACKET protocol family Jun 20 18:55:34.978677 kernel: Key type dns_resolver registered Jun 20 18:55:34.978693 kernel: IPI shorthand broadcast: enabled Jun 20 18:55:34.978709 kernel: sched_clock: Marking stable (558002578, 237293124)->(898778824, -103483122) Jun 20 18:55:34.978726 kernel: registered taskstats version 1 Jun 20 18:55:34.978744 kernel: Loading compiled-in X.509 certificates Jun 20 18:55:34.978761 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 583832681762bbd3c2cbcca308896cbba88c4497' Jun 20 18:55:34.978777 kernel: Key type .fscrypt registered Jun 20 18:55:34.978792 kernel: Key type fscrypt-provisioning registered Jun 20 18:55:34.978808 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 18:55:34.978825 kernel: ima: Allocated hash algorithm: sha1 Jun 20 18:55:34.978840 kernel: ima: No architecture policies found Jun 20 18:55:34.978857 kernel: clk: Disabling unused clocks Jun 20 18:55:34.978873 kernel: Freeing unused kernel image (initmem) memory: 43488K Jun 20 18:55:34.978891 kernel: Write protecting the kernel read-only data: 38912k Jun 20 18:55:34.978908 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Jun 20 18:55:34.978924 kernel: Run /init as init process Jun 20 18:55:34.978939 kernel: with arguments: Jun 20 18:55:34.978954 kernel: /init Jun 20 18:55:34.978969 kernel: with environment: Jun 20 18:55:34.978983 kernel: HOME=/ Jun 20 18:55:34.978997 kernel: TERM=linux Jun 20 18:55:34.979013 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 18:55:34.979033 systemd[1]: Successfully made /usr/ read-only. Jun 20 18:55:34.979054 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:55:34.979093 systemd[1]: Detected virtualization amazon. Jun 20 18:55:34.979111 systemd[1]: Detected architecture x86-64. 
Jun 20 18:55:34.979140 systemd[1]: Running in initrd. Jun 20 18:55:34.979160 systemd[1]: No hostname configured, using default hostname. Jun 20 18:55:34.979175 systemd[1]: Hostname set to . Jun 20 18:55:34.979189 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:55:34.979206 systemd[1]: Queued start job for default target initrd.target. Jun 20 18:55:34.979223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:55:34.979242 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:55:34.979258 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 18:55:34.979277 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:55:34.979294 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 18:55:34.979313 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 18:55:34.979332 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 18:55:34.979350 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 18:55:34.979368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:55:34.979386 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:55:34.979406 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:55:34.979424 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:55:34.979441 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:55:34.979459 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:55:34.979477 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:55:34.979495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:55:34.979512 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 18:55:34.979529 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 18:55:34.979549 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:55:34.979567 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:55:34.979584 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:55:34.979602 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:55:34.979618 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 18:55:34.979635 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:55:34.979652 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 18:55:34.979669 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 18:55:34.979686 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:55:34.979706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:55:34.979723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:55:34.979747 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 18:55:34.979765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jun 20 18:55:34.979783 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 18:55:34.979841 systemd-journald[179]: Collecting audit messages is disabled. Jun 20 18:55:34.979879 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 18:55:34.979897 systemd-journald[179]: Journal started Jun 20 18:55:34.979935 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2460c633eb7aab0cb3ffd669539e3f) is 4.7M, max 38.1M, 33.4M free. Jun 20 18:55:34.977148 systemd-modules-load[180]: Inserted module 'overlay' Jun 20 18:55:34.987478 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:55:34.989876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:55:35.001331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:55:35.007642 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 20 18:55:35.010468 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:55:35.013155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 18:55:35.019264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:55:35.032191 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 18:55:35.035091 kernel: Bridge firewalling registered Jun 20 18:55:35.036300 systemd-modules-load[180]: Inserted module 'br_netfilter' Jun 20 18:55:35.040167 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:55:35.041948 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:55:35.053285 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:55:35.054305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:55:35.055292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:55:35.064285 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 18:55:35.065933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:55:35.069845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:55:35.084017 dracut-cmdline[213]: dracut-dracut-053 Jun 20 18:55:35.088969 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 18:55:35.125817 systemd-resolved[215]: Positive Trust Anchors: Jun 20 18:55:35.126783 systemd-resolved[215]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:55:35.126850 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:55:35.136301 systemd-resolved[215]: Defaulting to hostname 'linux'. Jun 20 18:55:35.137808 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:55:35.138570 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:55:35.179107 kernel: SCSI subsystem initialized Jun 20 18:55:35.191106 kernel: Loading iSCSI transport class v2.0-870. Jun 20 18:55:35.202101 kernel: iscsi: registered transport (tcp) Jun 20 18:55:35.224432 kernel: iscsi: registered transport (qla4xxx) Jun 20 18:55:35.224516 kernel: QLogic iSCSI HBA Driver Jun 20 18:55:35.262979 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 18:55:35.268300 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 18:55:35.296160 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 18:55:35.296242 kernel: device-mapper: uevent: version 1.0.3 Jun 20 18:55:35.296265 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 20 18:55:35.340123 kernel: raid6: avx512x4 gen() 18181 MB/s Jun 20 18:55:35.358100 kernel: raid6: avx512x2 gen() 18162 MB/s Jun 20 18:55:35.376105 kernel: raid6: avx512x1 gen() 18231 MB/s Jun 20 18:55:35.394098 kernel: raid6: avx2x4 gen() 18137 MB/s Jun 20 18:55:35.412104 kernel: raid6: avx2x2 gen() 18159 MB/s Jun 20 18:55:35.430827 kernel: raid6: avx2x1 gen() 13965 MB/s Jun 20 18:55:35.430883 kernel: raid6: using algorithm avx512x1 gen() 18231 MB/s Jun 20 18:55:35.450390 kernel: raid6: .... xor() 21719 MB/s, rmw enabled Jun 20 18:55:35.450457 kernel: raid6: using avx512x2 recovery algorithm Jun 20 18:55:35.473106 kernel: xor: automatically using best checksumming function avx Jun 20 18:55:35.628119 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 18:55:35.638901 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:55:35.644308 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:55:35.661429 systemd-udevd[398]: Using default interface naming scheme 'v255'. Jun 20 18:55:35.667439 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:55:35.677775 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 18:55:35.695119 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Jun 20 18:55:35.725362 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:55:35.731327 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:55:35.785218 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:55:35.794340 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jun 20 18:55:35.817668 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 18:55:35.820028 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:55:35.822284 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:55:35.822815 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:55:35.831351 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 18:55:35.849782 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:55:35.886111 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 20 18:55:35.886380 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 20 18:55:35.897896 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jun 20 18:55:35.901144 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 18:55:35.918143 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:34:04:5c:e3:ff Jun 20 18:55:35.933993 kernel: AVX2 version of gcm_enc/dec engaged. Jun 20 18:55:35.934056 kernel: AES CTR mode by8 optimization enabled Jun 20 18:55:35.938859 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:55:35.942116 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 20 18:55:35.942350 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 20 18:55:35.944952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:55:35.945857 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:55:35.948983 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:55:35.958151 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 18:55:35.955167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:55:35.955489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:55:35.960570 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:55:35.967669 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 18:55:35.967722 kernel: GPT:9289727 != 16777215 Jun 20 18:55:35.967751 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 18:55:35.967768 kernel: GPT:9289727 != 16777215 Jun 20 18:55:35.967787 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 18:55:35.967806 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:55:35.971202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:55:35.977917 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:55:35.986639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:55:35.986770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:55:35.989541 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:55:35.998400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:55:36.029497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:55:36.036307 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 20 18:55:36.046418 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (455) Jun 20 18:55:36.059672 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:55:36.079818 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (445) Jun 20 18:55:36.125793 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 20 18:55:36.145840 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 20 18:55:36.157150 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 20 18:55:36.166746 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 20 18:55:36.167356 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 20 18:55:36.173257 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 18:55:36.180834 disk-uuid[635]: Primary Header is updated. Jun 20 18:55:36.180834 disk-uuid[635]: Secondary Entries is updated. Jun 20 18:55:36.180834 disk-uuid[635]: Secondary Header is updated. Jun 20 18:55:36.188115 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:55:36.194099 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:55:37.203281 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:55:37.203366 disk-uuid[636]: The operation has completed successfully. Jun 20 18:55:37.306012 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 18:55:37.306121 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 18:55:37.356261 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 18:55:37.359369 sh[894]: Success Jun 20 18:55:37.380252 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 20 18:55:37.497628 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 18:55:37.513745 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 18:55:37.514891 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 18:55:37.541376 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91 Jun 20 18:55:37.541439 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:55:37.543128 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 20 18:55:37.545879 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 20 18:55:37.545930 kernel: BTRFS info (device dm-0): using free space tree Jun 20 18:55:37.656127 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 20 18:55:37.668937 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 18:55:37.670005 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 18:55:37.676248 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 18:55:37.679238 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 20 18:55:37.705942 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:55:37.706002 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:55:37.706016 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 20 18:55:37.713097 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 20 18:55:37.719100 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:55:37.721770 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 18:55:37.728232 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 18:55:37.777376 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:55:37.784287 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:55:37.810570 systemd-networkd[1083]: lo: Link UP Jun 20 18:55:37.810583 systemd-networkd[1083]: lo: Gained carrier Jun 20 18:55:37.812014 systemd-networkd[1083]: Enumeration completed Jun 20 18:55:37.812440 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:55:37.812586 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:55:37.812591 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:55:37.814017 systemd[1]: Reached target network.target - Network. Jun 20 18:55:37.815441 systemd-networkd[1083]: eth0: Link UP Jun 20 18:55:37.815445 systemd-networkd[1083]: eth0: Gained carrier Jun 20 18:55:37.815455 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:55:37.831181 systemd-networkd[1083]: eth0: DHCPv4 address 172.31.28.28/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 18:55:38.084899 ignition[1008]: Ignition 2.20.0 Jun 20 18:55:38.084911 ignition[1008]: Stage: fetch-offline Jun 20 18:55:38.085103 ignition[1008]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:55:38.086404 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:55:38.085112 ignition[1008]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:55:38.085347 ignition[1008]: Ignition finished successfully Jun 20 18:55:38.094321 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 20 18:55:38.106178 ignition[1092]: Ignition 2.20.0 Jun 20 18:55:38.106189 ignition[1092]: Stage: fetch Jun 20 18:55:38.106500 ignition[1092]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:55:38.106509 ignition[1092]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:55:38.106596 ignition[1092]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:55:38.114619 ignition[1092]: PUT result: OK Jun 20 18:55:38.116608 ignition[1092]: parsed url from cmdline: "" Jun 20 18:55:38.116619 ignition[1092]: no config URL provided Jun 20 18:55:38.116626 ignition[1092]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:55:38.116639 ignition[1092]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:55:38.116678 ignition[1092]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:55:38.117261 ignition[1092]: PUT result: OK Jun 20 18:55:38.117301 ignition[1092]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 20 18:55:38.117984 ignition[1092]: GET result: OK Jun 20 18:55:38.118056 ignition[1092]: parsing config with SHA512: 53e11bc2f8d7ee0e2ac89cbee8dff2db3013b9d36b65a6e7ffda7cfb2fb4dbe0079375758ba1558448f780ea5d6a9b5a67a1da345577f8164cecfc72996d586b Jun 20 18:55:38.122110 unknown[1092]: fetched base config from "system" Jun 20 18:55:38.122120 unknown[1092]: fetched base config from "system" Jun 20 18:55:38.122444 ignition[1092]: fetch: fetch complete Jun 20 18:55:38.122125 unknown[1092]: fetched user config from "aws" Jun 20 18:55:38.122449 ignition[1092]: fetch: fetch passed Jun 20 18:55:38.122486 ignition[1092]: Ignition finished successfully Jun 20 18:55:38.124162 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 18:55:38.129313 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 18:55:38.143862 ignition[1098]: Ignition 2.20.0 Jun 20 18:55:38.143873 ignition[1098]: Stage: kargs Jun 20 18:55:38.144208 ignition[1098]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:55:38.144219 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:55:38.144299 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:55:38.145224 ignition[1098]: PUT result: OK Jun 20 18:55:38.147694 ignition[1098]: kargs: kargs passed Jun 20 18:55:38.147885 ignition[1098]: Ignition finished successfully Jun 20 18:55:38.148962 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 18:55:38.162341 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 18:55:38.175710 ignition[1104]: Ignition 2.20.0 Jun 20 18:55:38.175793 ignition[1104]: Stage: disks Jun 20 18:55:38.176132 ignition[1104]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:55:38.176141 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:55:38.176223 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:55:38.177927 ignition[1104]: PUT result: OK Jun 20 18:55:38.182620 ignition[1104]: disks: disks passed Jun 20 18:55:38.182687 ignition[1104]: Ignition finished successfully Jun 20 18:55:38.183685 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 18:55:38.184682 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 18:55:38.185035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 18:55:38.185623 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jun 20 18:55:38.186174 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:55:38.186722 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:55:38.198362 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 18:55:38.231879 systemd-fsck[1112]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 20 18:55:38.234691 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 18:55:38.241229 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 18:55:38.339099 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none. Jun 20 18:55:38.340550 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 18:55:38.341773 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 18:55:38.348203 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:55:38.351206 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 18:55:38.353957 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 20 18:55:38.355212 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 18:55:38.355249 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:55:38.364778 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 18:55:38.373315 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 18:55:38.374684 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1131) Jun 20 18:55:38.380889 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:55:38.380952 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:55:38.380976 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 20 18:55:38.388117 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 20 18:55:38.390320 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:55:38.666440 initrd-setup-root[1155]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 18:55:38.683909 initrd-setup-root[1162]: cut: /sysroot/etc/group: No such file or directory Jun 20 18:55:38.689422 initrd-setup-root[1169]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 18:55:38.708309 initrd-setup-root[1176]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 18:55:38.937724 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 18:55:38.942194 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 18:55:38.945472 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 18:55:38.955696 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jun 20 18:55:38.958803 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:55:38.989216 ignition[1244]: INFO : Ignition 2.20.0 Jun 20 18:55:38.989216 ignition[1244]: INFO : Stage: mount Jun 20 18:55:38.990807 ignition[1244]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:55:38.990807 ignition[1244]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:55:38.990807 ignition[1244]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:55:38.993201 ignition[1244]: INFO : PUT result: OK Jun 20 18:55:38.996700 ignition[1244]: INFO : mount: mount passed Jun 20 18:55:38.997297 ignition[1244]: INFO : Ignition finished successfully Jun 20 18:55:38.998445 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:55:39.003251 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:55:39.007366 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 18:55:39.017323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:55:39.039107 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1255) Jun 20 18:55:39.043333 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:55:39.043411 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:55:39.043425 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 20 18:55:39.050117 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 20 18:55:39.052598 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:55:39.071205 ignition[1271]: INFO : Ignition 2.20.0 Jun 20 18:55:39.071205 ignition[1271]: INFO : Stage: files Jun 20 18:55:39.072734 ignition[1271]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:55:39.072734 ignition[1271]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:55:39.072734 ignition[1271]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:55:39.074241 ignition[1271]: INFO : PUT result: OK Jun 20 18:55:39.076277 ignition[1271]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:55:39.096957 ignition[1271]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:55:39.096957 ignition[1271]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:55:39.133899 ignition[1271]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:55:39.134715 ignition[1271]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:55:39.134715 ignition[1271]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:55:39.134340 unknown[1271]: wrote ssh authorized keys file for user: core Jun 20 18:55:39.136917 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 18:55:39.136917 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 20 18:55:39.276453 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:55:39.411307 systemd-networkd[1083]: eth0: Gained IPv6LL Jun 20 18:55:39.435909 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
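[Editor's note] The files stage above starts fetching payloads such as the helm tarball, numbering each request ("attempt #1"). The sketch below imitates that attempt-numbered retry pattern; Ignition itself is a Go program whose actual retry and backoff policy is not visible in this log, so treat the loop as illustrative only.

```python
import time
import urllib.error
import urllib.request

def fetch_with_retries(url, attempts=5, backoff=2.0):
    """Fetch a URL, numbering attempts the way the "attempt #N" lines above do."""
    for attempt in range(1, attempts + 1):
        try:
            print(f"GET {url}: attempt #{attempt}")
            with urllib.request.urlopen(url, timeout=30) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)  # simple linear backoff between attempts

if __name__ == "__main__":
    data = fetch_with_retries("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz")
    print(f"fetched {len(data)} bytes")
```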
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 18:55:39.437133 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:55:39.437133 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 20 18:55:39.934566 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 18:55:40.053097 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 18:55:40.054027 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 18:55:40.063829 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jun 20 18:55:40.686714 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 18:55:40.955261 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 18:55:40.955261 ignition[1271]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 18:55:40.957226 ignition[1271]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:55:40.957990 ignition[1271]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:55:40.957990 ignition[1271]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 18:55:40.957990 ignition[1271]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:55:40.957990 ignition[1271]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:55:40.957990 ignition[1271]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:55:40.957990 ignition[1271]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:55:40.957990 ignition[1271]: INFO : files: files passed Jun 20 18:55:40.957990 ignition[1271]: INFO : Ignition finished successfully Jun 20 18:55:40.960097 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:55:40.963315 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:55:40.967242 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:55:40.969841 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:55:40.969946 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 18:55:40.982901 initrd-setup-root-after-ignition[1300]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:55:40.982901 initrd-setup-root-after-ignition[1300]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:55:40.986127 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:55:40.985377 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:55:40.987203 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:55:40.995316 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:55:41.019616 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:55:41.019818 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:55:41.021398 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:55:41.022273 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:55:41.023100 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:55:41.025264 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:55:41.046917 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:55:41.052348 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:55:41.065329 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:55:41.066181 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:55:41.067124 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:55:41.068161 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jun 20 18:55:41.068358 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:55:41.069581 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:55:41.070478 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:55:41.071295 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:55:41.072218 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:55:41.073000 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:55:41.073805 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:55:41.074605 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:55:41.075423 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:55:41.076723 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:55:41.077501 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:55:41.078248 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:55:41.078438 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:55:41.079528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:55:41.080445 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:55:41.081136 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 18:55:41.081280 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:55:41.081918 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:55:41.082112 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:55:41.083484 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:55:41.083671 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:55:41.084445 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:55:41.084604 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:55:41.095624 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:55:41.096690 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:55:41.096839 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:55:41.098885 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:55:41.099608 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:55:41.100204 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:55:41.101050 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:55:41.101671 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:55:41.105962 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:55:41.106524 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jun 20 18:55:41.112660 ignition[1324]: INFO : Ignition 2.20.0 Jun 20 18:55:41.113919 ignition[1324]: INFO : Stage: umount Jun 20 18:55:41.113919 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:55:41.113919 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:55:41.113919 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:55:41.115784 ignition[1324]: INFO : PUT result: OK Jun 20 18:55:41.118366 ignition[1324]: INFO : umount: umount passed Jun 20 18:55:41.118788 ignition[1324]: INFO : Ignition finished successfully Jun 20 18:55:41.120541 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:55:41.120657 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:55:41.121249 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 18:55:41.121298 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:55:41.122502 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 18:55:41.122550 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:55:41.125225 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:55:41.125283 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:55:41.125840 systemd[1]: Stopped target network.target - Network. Jun 20 18:55:41.128047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 18:55:41.128173 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:55:41.129419 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:55:41.131155 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 18:55:41.134143 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:55:41.134858 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 18:55:41.136179 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:55:41.136826 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:55:41.136905 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:55:41.137463 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 18:55:41.137516 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:55:41.138059 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:55:41.138146 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 18:55:41.138854 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:55:41.138912 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:55:41.139638 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:55:41.140354 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:55:41.142727 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:55:41.145657 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:55:41.145761 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:55:41.149816 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:55:41.150232 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:55:41.150376 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jun 20 18:55:41.152554 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:55:41.153819 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:55:41.153905 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:55:41.158186 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 18:55:41.158785 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:55:41.158860 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:55:41.159525 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:55:41.159587 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:55:41.164232 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:55:41.164301 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 18:55:41.164800 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:55:41.164862 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:55:41.165598 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:55:41.170766 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:55:41.170876 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:55:41.179627 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:55:41.179935 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:55:41.181788 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:55:41.181870 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:55:41.182913 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 18:55:41.182965 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:55:41.183881 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 18:55:41.183945 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:55:41.185166 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:55:41.185231 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:55:41.187059 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:55:41.187153 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:55:41.194307 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:55:41.194910 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:55:41.194989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:55:41.198233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:55:41.198316 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:55:41.202256 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 18:55:41.203599 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:55:41.204184 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jun 20 18:55:41.204327 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 18:55:41.205552 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:55:41.205687 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:55:41.251435 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:55:41.251553 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 18:55:41.252976 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 18:55:41.253527 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 18:55:41.253617 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:55:41.263311 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 18:55:41.270993 systemd[1]: Switching root. Jun 20 18:55:41.313338 systemd-journald[179]: Journal stopped Jun 20 18:55:43.122266 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jun 20 18:55:43.122326 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:55:43.122341 kernel: SELinux: policy capability open_perms=1 Jun 20 18:55:43.122355 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:55:43.122370 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:55:43.122386 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:55:43.122397 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:55:43.122409 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:55:43.122420 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:55:43.122432 kernel: audit: type=1403 audit(1750445741.721:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:55:43.122445 systemd[1]: Successfully loaded SELinux policy in 63.329ms. Jun 20 18:55:43.122465 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.388ms. Jun 20 18:55:43.122482 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:55:43.122500 systemd[1]: Detected virtualization amazon. Jun 20 18:55:43.122513 systemd[1]: Detected architecture x86-64. Jun 20 18:55:43.122525 systemd[1]: Detected first boot. Jun 20 18:55:43.122537 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:55:43.122549 zram_generator::config[1368]: No configuration found. Jun 20 18:55:43.122564 kernel: Guest personality initialized and is inactive Jun 20 18:55:43.122575 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 20 18:55:43.122586 kernel: Initialized host personality Jun 20 18:55:43.122601 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:55:43.122613 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:55:43.122626 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:55:43.122641 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:55:43.122653 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:55:43.122665 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:55:43.122677 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
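[Editor's note] After the switch-root, systemd 256.8 reports its build-time feature string ("+PAM +AUDIT +SELINUX -APPARMOR ..."). A tiny parser for that +/- flag format, using a subset of the string quoted in the log as its example input:

```python
def parse_features(feature_string):
    """Split a systemd feature string into enabled and disabled feature sets."""
    enabled, disabled = set(), set()
    for token in feature_string.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

if __name__ == "__main__":
    enabled, disabled = parse_features(
        "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -FIDO2 -PWQUALITY")
    print("enabled: ", sorted(enabled))
    print("disabled:", sorted(disabled))
```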
Jun 20 18:55:43.122689 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:55:43.122702 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:55:43.122717 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:55:43.122730 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:55:43.122742 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 18:55:43.122754 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:55:43.122766 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 18:55:43.122778 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:55:43.122790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:55:43.122802 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 18:55:43.122817 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:55:43.122830 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:55:43.122842 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:55:43.122855 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 18:55:43.122867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:55:43.122879 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:55:43.122891 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 18:55:43.122904 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 18:55:43.122919 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 18:55:43.122932 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:55:43.122945 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:55:43.122957 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:55:43.122969 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:55:43.122981 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:55:43.122993 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:55:43.123006 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:55:43.123018 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:55:43.123032 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:55:43.123044 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:55:43.123056 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:55:43.123068 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:55:43.130161 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:55:43.130176 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:55:43.130189 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 20 18:55:43.130202 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:55:43.130215 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:55:43.130231 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:55:43.130245 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 18:55:43.130257 systemd[1]: Reached target machines.target - Containers. Jun 20 18:55:43.130270 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 18:55:43.130283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:55:43.130295 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:55:43.130308 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 18:55:43.130321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:55:43.130335 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:55:43.130348 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:55:43.130360 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:55:43.130372 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:55:43.130385 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:55:43.130398 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 18:55:43.130410 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:55:43.130423 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:55:43.130436 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 18:55:43.130452 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:55:43.130465 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:55:43.130478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:55:43.130490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:55:43.130503 kernel: fuse: init (API version 7.39) Jun 20 18:55:43.130517 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:55:43.130530 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:55:43.130542 kernel: loop: module loaded Jun 20 18:55:43.130557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:55:43.130569 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:55:43.130581 systemd[1]: Stopped verity-setup.service. Jun 20 18:55:43.130594 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:55:43.130607 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jun 20 18:55:43.130622 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:55:43.130637 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:55:43.130649 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:55:43.130661 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:55:43.130674 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 18:55:43.130689 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:55:43.130701 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:55:43.130713 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:55:43.130726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:55:43.130738 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:55:43.130751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:55:43.130763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:55:43.130775 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 18:55:43.130787 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:55:43.130802 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:55:43.130814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:55:43.130827 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:55:43.130839 kernel: ACPI: bus type drm_connector registered Jun 20 18:55:43.130850 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:55:43.130863 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 18:55:43.130875 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 18:55:43.130888 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:55:43.130901 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:55:43.130915 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:55:43.130962 systemd-journald[1451]: Collecting audit messages is disabled. Jun 20 18:55:43.130987 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:55:43.131003 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 18:55:43.131017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:55:43.131029 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:55:43.131043 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:55:43.131059 systemd-journald[1451]: Journal started Jun 20 18:55:43.131127 systemd-journald[1451]: Runtime Journal (/run/log/journal/ec2460c633eb7aab0cb3ffd669539e3f) is 4.7M, max 38.1M, 33.4M free. Jun 20 18:55:42.735597 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:55:42.741618 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 20 18:55:42.742054 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jun 20 18:55:43.138149 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:55:43.138196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:55:43.141988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:55:43.151096 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:55:43.165131 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:55:43.166135 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:55:43.166969 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:55:43.167237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:55:43.168356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:55:43.171357 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:55:43.172276 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:55:43.172950 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 18:55:43.173670 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:55:43.176045 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:55:43.176982 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 18:55:43.191054 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 18:55:43.192114 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:55:43.198026 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:55:43.200125 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:55:43.212845 kernel: loop0: detected capacity change from 0 to 147912 Jun 20 18:55:43.205259 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 18:55:43.213299 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 18:55:43.215598 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:55:43.219962 systemd-journald[1451]: Time spent on flushing to /var/log/journal/ec2460c633eb7aab0cb3ffd669539e3f is 30.690ms for 1021 entries. Jun 20 18:55:43.219962 systemd-journald[1451]: System Journal (/var/log/journal/ec2460c633eb7aab0cb3ffd669539e3f) is 8M, max 195.6M, 187.6M free. Jun 20 18:55:43.261368 systemd-journald[1451]: Received client request to flush runtime journal. Jun 20 18:55:43.235481 udevadm[1513]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 20 18:55:43.263654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:55:43.275246 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:55:43.303260 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:55:43.317373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
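[Editor's note] The journald line above reports 30.690 ms spent flushing 1021 entries to the persistent journal, i.e. roughly 0.03 ms per entry (about 33 entries per millisecond). The arithmetic, using the two figures from the log:

```python
flush_ms = 30.690  # "Time spent on flushing ... is 30.690ms"
entries = 1021     # "... for 1021 entries"

per_entry_ms = flush_ms / entries
entries_per_ms = entries / flush_ms
print(f"{per_entry_ms:.4f} ms/entry, {entries_per_ms:.1f} entries/ms")
```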
Jun 20 18:55:43.339407 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:55:43.366264 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jun 20 18:55:43.370096 kernel: loop1: detected capacity change from 0 to 221472 Jun 20 18:55:43.368828 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jun 20 18:55:43.377309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:55:43.516096 kernel: loop2: detected capacity change from 0 to 138176 Jun 20 18:55:43.634106 kernel: loop3: detected capacity change from 0 to 62832 Jun 20 18:55:43.690104 kernel: loop4: detected capacity change from 0 to 147912 Jun 20 18:55:43.719151 kernel: loop5: detected capacity change from 0 to 221472 Jun 20 18:55:43.744058 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:55:43.759101 kernel: loop6: detected capacity change from 0 to 138176 Jun 20 18:55:43.788287 kernel: loop7: detected capacity change from 0 to 62832 Jun 20 18:55:43.809462 (sd-merge)[1530]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 20 18:55:43.811122 (sd-merge)[1530]: Merged extensions into '/usr'. Jun 20 18:55:43.817935 systemd[1]: Reload requested from client PID 1484 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:55:43.817954 systemd[1]: Reloading... Jun 20 18:55:43.954094 zram_generator::config[1558]: No configuration found. Jun 20 18:55:44.170892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:55:44.261003 systemd[1]: Reloading finished in 442 ms. Jun 20 18:55:44.274771 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 18:55:44.275528 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:55:44.283448 systemd[1]: Starting ensure-sysext.service... Jun 20 18:55:44.287287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:55:44.290025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:55:44.310127 systemd[1]: Reload requested from client PID 1610 ('systemctl') (unit ensure-sysext.service)... Jun 20 18:55:44.310145 systemd[1]: Reloading... Jun 20 18:55:44.316931 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 18:55:44.317565 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:55:44.318510 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:55:44.318855 systemd-tmpfiles[1611]: ACLs are not supported, ignoring. Jun 20 18:55:44.318972 systemd-tmpfiles[1611]: ACLs are not supported, ignoring. Jun 20 18:55:44.323224 systemd-tmpfiles[1611]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:55:44.323238 systemd-tmpfiles[1611]: Skipping /boot Jun 20 18:55:44.348454 systemd-udevd[1612]: Using default interface naming scheme 'v255'. Jun 20 18:55:44.349354 systemd-tmpfiles[1611]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:55:44.349365 systemd-tmpfiles[1611]: Skipping /boot Jun 20 18:55:44.408099 zram_generator::config[1654]: No configuration found. 
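[Editor's note] The (sd-merge) lines above show systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extensions into /usr. The sketch below lists raw extension images in the directories systemd-sysext is documented to scan; the directory list is quoted from memory, so treat it as an assumption rather than an authoritative search path.

```python
from pathlib import Path

# Assumed systemd-sysext search locations for extension images.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images():
    """List *.raw extension images (like the kubernetes.raw link written earlier)
    that a merge such as the one logged above would consider."""
    found = []
    for directory in SEARCH_DIRS:
        base = Path(directory)
        if base.is_dir():
            found.extend(sorted(str(p) for p in base.glob("*.raw")))
    return found

if __name__ == "__main__":
    for image in list_extension_images():
        print(image)
```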
Jun 20 18:55:44.490194 (udev-worker)[1678]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:55:44.570287 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 20 18:55:44.575099 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jun 20 18:55:44.586042 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:55:44.587527 kernel: ACPI: button: Power Button [PWRF] Jun 20 18:55:44.591135 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jun 20 18:55:44.604095 kernel: ACPI: button: Sleep Button [SLPF] Jun 20 18:55:44.630102 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jun 20 18:55:44.694095 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 18:55:44.697170 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 18:55:44.698447 systemd[1]: Reloading finished in 387 ms. Jun 20 18:55:44.711235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:55:44.712463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:55:44.742417 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1668) Jun 20 18:55:44.767063 systemd[1]: Finished ensure-sysext.service. Jun 20 18:55:44.781061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:55:44.788334 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:55:44.793245 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:55:44.794026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:55:44.803272 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:55:44.807325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:55:44.809795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:55:44.811933 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:55:44.813855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:55:44.814250 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:55:44.819239 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:55:44.822236 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:55:44.826284 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:55:44.828190 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:55:44.830981 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:55:44.841354 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 18:55:44.841758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:55:44.856792 ldconfig[1477]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:55:44.861923 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:55:44.865713 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:55:44.867140 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:55:44.887346 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 18:55:44.895610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:55:44.896043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:55:44.897234 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:55:44.898144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:55:44.904841 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:55:44.906012 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 18:55:44.911207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:55:44.911393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:55:44.912615 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:55:44.947203 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 18:55:44.953250 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 18:55:44.955280 augenrules[1843]: No rules Jun 20 18:55:44.955999 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:55:44.956240 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:55:44.985345 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:55:44.985926 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 18:55:44.987983 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:55:44.999550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 20 18:55:45.000168 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 18:55:45.000838 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 18:55:45.004347 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 18:55:45.012743 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 18:55:45.035249 lvm[1859]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:55:45.039875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:55:45.056194 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
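[Editor's note] The ldconfig message above ("/usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") just records that a text configuration file was examined while scanning for libraries. The check it refers to is the four-byte ELF magic, which the following sketch reproduces:

```python
ELF_MAGIC = b"\x7fELF"

def is_elf(path):
    """Check the four-byte ELF magic that ldconfig's message above refers to."""
    try:
        with open(path, "rb") as f:
            return f.read(4) == ELF_MAGIC
    except OSError:
        return False

if __name__ == "__main__":
    for candidate in ("/usr/lib/ld.so.conf", "/bin/ls"):
        print(candidate, "->", "ELF" if is_elf(candidate) else "not an ELF file")
```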
Jun 20 18:55:45.060976 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 18:55:45.061785 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:55:45.068344 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 18:55:45.074064 lvm[1869]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:55:45.104515 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 18:55:45.108720 systemd-networkd[1783]: lo: Link UP Jun 20 18:55:45.108729 systemd-networkd[1783]: lo: Gained carrier Jun 20 18:55:45.110186 systemd-networkd[1783]: Enumeration completed Jun 20 18:55:45.110297 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:55:45.112301 systemd-networkd[1783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:55:45.112313 systemd-networkd[1783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:55:45.115978 systemd-networkd[1783]: eth0: Link UP Jun 20 18:55:45.116125 systemd-networkd[1783]: eth0: Gained carrier Jun 20 18:55:45.116144 systemd-networkd[1783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:55:45.118610 systemd-resolved[1786]: Positive Trust Anchors: Jun 20 18:55:45.118622 systemd-resolved[1786]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:55:45.118660 systemd-resolved[1786]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:55:45.118771 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 18:55:45.121348 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 18:55:45.125231 systemd-networkd[1783]: eth0: DHCPv4 address 172.31.28.28/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 18:55:45.134749 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 18:55:45.137305 systemd-resolved[1786]: Defaulting to hostname 'linux'. Jun 20 18:55:45.139800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:55:45.140390 systemd[1]: Reached target network.target - Network. Jun 20 18:55:45.140783 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:55:45.141171 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:55:45.141591 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:55:45.141942 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:55:45.142439 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
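[Editor's note] systemd-networkd above acquires 172.31.28.28/20 with gateway 172.31.16.1 via DHCPv4. Working that prefix out with Python's ipaddress module confirms the subnet is 172.31.16.0/20 and that the gateway sits inside it:

```python
import ipaddress

# Values taken from the DHCPv4 line above.
iface = ipaddress.ip_interface("172.31.28.28/20")
gateway = ipaddress.ip_address("172.31.16.1")

print("network:          ", iface.network)                     # 172.31.16.0/20
print("usable hosts:     ", iface.network.num_addresses - 2)   # minus network/broadcast
print("gateway in subnet:", gateway in iface.network)          # True
```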
Jun 20 18:55:45.142845 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:55:45.143183 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:55:45.143492 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:55:45.143524 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:55:45.143837 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:55:45.145172 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:55:45.146916 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:55:45.149837 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:55:45.150408 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:55:45.150777 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:55:45.153863 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:55:45.154780 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:55:45.155872 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:55:45.156336 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:55:45.156691 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:55:45.157089 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:55:45.157115 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:55:45.158156 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:55:45.161270 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:55:45.164527 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:55:45.167659 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:55:45.171418 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:55:45.172133 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:55:45.175227 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:55:45.178321 systemd[1]: Started ntpd.service - Network Time Service. Jun 20 18:55:45.179916 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:55:45.182518 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 20 18:55:45.184823 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:55:45.197863 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 18:55:45.198657 jq[1880]: false Jun 20 18:55:45.204269 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:55:45.205531 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:55:45.206026 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jun 20 18:55:45.215261 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:55:45.217634 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:55:45.222509 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:55:45.222735 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:55:45.230512 jq[1891]: true Jun 20 18:55:45.264189 jq[1900]: true Jun 20 18:55:45.269128 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:55:45.269348 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 18:55:45.282815 dbus-daemon[1879]: [system] SELinux support is enabled Jun 20 18:55:45.284208 dbus-daemon[1879]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1783 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 20 18:55:45.291524 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:55:45.293219 extend-filesystems[1881]: Found loop4 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found loop5 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found loop6 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found loop7 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found nvme0n1 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found nvme0n1p1 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found nvme0n1p2 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found nvme0n1p3 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found usr Jun 20 18:55:45.293219 extend-filesystems[1881]: Found nvme0n1p4 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found nvme0n1p6 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found nvme0n1p7 Jun 20 18:55:45.293219 extend-filesystems[1881]: Found nvme0n1p9 Jun 20 18:55:45.293219 extend-filesystems[1881]: Checking size of /dev/nvme0n1p9 Jun 20 18:55:45.299498 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:55:45.299939 update_engine[1889]: I20250620 18:55:45.297952 1889 main.cc:92] Flatcar Update Engine starting Jun 20 18:55:45.299550 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:55:45.300173 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:55:45.300197 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:55:45.301696 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:55:45.302625 dbus-daemon[1879]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 18:55:45.301921 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
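[Editor's note] extend-filesystems above enumerates the loop devices and nvme partitions and then checks the size of /dev/nvme0n1p9 before deciding whether to grow it. A sketch of reading that size from sysfs follows; /sys/class/block/<name>/size counts 512-byte sectors, and the partition name is simply the one mentioned in the log, so the script assumes it exists on the machine running it.

```python
from pathlib import Path

def partition_size_bytes(name="nvme0n1p9"):
    """Return a partition's size in bytes from its sysfs sector count."""
    sectors = int(Path(f"/sys/class/block/{name}/size").read_text())
    return sectors * 512

if __name__ == "__main__":
    size = partition_size_bytes()
    print(f"/dev/nvme0n1p9: {size} bytes ({size / 2**30:.2f} GiB)")
```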
Jun 20 18:55:45.307433 (ntainerd)[1906]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:55:45.312713 ntpd[1883]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:33:02 UTC 2025 (1): Starting Jun 20 18:55:45.319960 update_engine[1889]: I20250620 18:55:45.316509 1889 update_check_scheduler.cc:74] Next update check in 7m24s Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:33:02 UTC 2025 (1): Starting Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: ---------------------------------------------------- Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: ntp-4 is maintained by Network Time Foundation, Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: corporation. Support and training for ntp-4 are Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: available at https://www.nwtime.org/support Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: ---------------------------------------------------- Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: proto: precision = 0.056 usec (-24) Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: basedate set to 2025-06-08 Jun 20 18:55:45.319995 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: gps base set to 2025-06-08 (week 2370) Jun 20 18:55:45.312740 ntpd[1883]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 18:55:45.318245 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 20 18:55:45.312747 ntpd[1883]: ---------------------------------------------------- Jun 20 18:55:45.318661 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:55:45.312754 ntpd[1883]: ntp-4 is maintained by Network Time Foundation, Jun 20 18:55:45.312761 ntpd[1883]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 18:55:45.312767 ntpd[1883]: corporation. Support and training for ntp-4 are Jun 20 18:55:45.312775 ntpd[1883]: available at https://www.nwtime.org/support Jun 20 18:55:45.312782 ntpd[1883]: ---------------------------------------------------- Jun 20 18:55:45.317836 ntpd[1883]: proto: precision = 0.056 usec (-24) Jun 20 18:55:45.318937 ntpd[1883]: basedate set to 2025-06-08 Jun 20 18:55:45.318952 ntpd[1883]: gps base set to 2025-06-08 (week 2370) Jun 20 18:55:45.321699 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jun 20 18:55:45.329221 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 18:55:45.329221 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 18:55:45.329221 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 18:55:45.329221 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: Listen normally on 3 eth0 172.31.28.28:123 Jun 20 18:55:45.329221 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: Listen normally on 4 lo [::1]:123 Jun 20 18:55:45.329221 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: bind(21) AF_INET6 fe80::434:4ff:fe5c:e3ff%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:55:45.329221 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: unable to create socket on eth0 (5) for fe80::434:4ff:fe5c:e3ff%2#123 Jun 20 18:55:45.329221 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: failed to init interface for address fe80::434:4ff:fe5c:e3ff%2 Jun 20 18:55:45.329221 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: Listening on routing socket on fd #21 for interface updates Jun 20 18:55:45.324452 ntpd[1883]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 18:55:45.324492 ntpd[1883]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 18:55:45.326210 ntpd[1883]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 18:55:45.326245 ntpd[1883]: Listen normally on 3 eth0 172.31.28.28:123 Jun 20 18:55:45.326277 ntpd[1883]: Listen normally on 4 lo [::1]:123 Jun 20 18:55:45.326318 ntpd[1883]: bind(21) AF_INET6 fe80::434:4ff:fe5c:e3ff%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:55:45.326334 ntpd[1883]: unable to create socket on eth0 (5) for fe80::434:4ff:fe5c:e3ff%2#123 Jun 20 18:55:45.326347 ntpd[1883]: failed to init interface for address fe80::434:4ff:fe5c:e3ff%2 Jun 20 18:55:45.326372 ntpd[1883]: Listening on routing socket on fd #21 for interface updates Jun 20 18:55:45.345723 tar[1894]: linux-amd64/helm Jun 20 18:55:45.355327 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:55:45.356335 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:55:45.356335 ntpd[1883]: 20 Jun 18:55:45 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:55:45.355361 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:55:45.363220 extend-filesystems[1881]: Resized partition /dev/nvme0n1p9 Jun 20 18:55:45.365611 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jun 20 18:55:45.371324 extend-filesystems[1945]: resize2fs 1.47.1 (20-May-2024) Jun 20 18:55:45.376101 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetch successful Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetch successful Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetch successful Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetch successful Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetch failed with 404: resource not found Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetch successful Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetch successful Jun 20 18:55:45.431814 coreos-metadata[1878]: Jun 20 18:55:45.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 20 18:55:45.469631 coreos-metadata[1878]: Jun 20 18:55:45.433 INFO Fetch successful Jun 20 18:55:45.469631 coreos-metadata[1878]: Jun 20 18:55:45.433 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 20 18:55:45.469631 coreos-metadata[1878]: Jun 20 18:55:45.433 INFO Fetch successful Jun 20 18:55:45.469631 coreos-metadata[1878]: Jun 20 18:55:45.433 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 20 18:55:45.469631 coreos-metadata[1878]: Jun 20 18:55:45.433 INFO Fetch successful Jun 20 18:55:45.480504 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 20 18:55:45.491615 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 18:55:45.493024 extend-filesystems[1945]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 20 18:55:45.493024 extend-filesystems[1945]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 20 18:55:45.493024 extend-filesystems[1945]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 20 18:55:45.491825 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jun 20 18:55:45.506784 bash[1946]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:55:45.506868 extend-filesystems[1881]: Resized filesystem in /dev/nvme0n1p9 Jun 20 18:55:45.497323 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:55:45.498165 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:55:45.503862 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:55:45.514311 systemd[1]: Starting sshkeys.service... Jun 20 18:55:45.538554 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1668) Jun 20 18:55:45.542829 systemd-logind[1888]: Watching system buttons on /dev/input/event1 (Power Button) Jun 20 18:55:45.542854 systemd-logind[1888]: Watching system buttons on /dev/input/event2 (Sleep Button) Jun 20 18:55:45.542872 systemd-logind[1888]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 18:55:45.544246 systemd-logind[1888]: New seat seat0. Jun 20 18:55:45.549814 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:55:45.575433 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 18:55:45.595193 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 18:55:45.616658 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 20 18:55:45.623456 dbus-daemon[1879]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 20 18:55:45.626180 dbus-daemon[1879]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1920 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 20 18:55:45.639184 systemd[1]: Starting polkit.service - Authorization Manager... Jun 20 18:55:45.693014 polkitd[2024]: Started polkitd version 121 Jun 20 18:55:45.750902 polkitd[2024]: Loading rules from directory /etc/polkit-1/rules.d Jun 20 18:55:45.750984 polkitd[2024]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 20 18:55:45.757819 polkitd[2024]: Finished loading, compiling and executing 2 rules Jun 20 18:55:45.759745 dbus-daemon[1879]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 20 18:55:45.759890 systemd[1]: Started polkit.service - Authorization Manager. 
Jun 20 18:55:45.766524 polkitd[2024]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 20 18:55:45.808989 coreos-metadata[1991]: Jun 20 18:55:45.808 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 18:55:45.813250 coreos-metadata[1991]: Jun 20 18:55:45.813 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 20 18:55:45.814513 coreos-metadata[1991]: Jun 20 18:55:45.814 INFO Fetch successful Jun 20 18:55:45.814720 coreos-metadata[1991]: Jun 20 18:55:45.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 20 18:55:45.815588 coreos-metadata[1991]: Jun 20 18:55:45.815 INFO Fetch successful Jun 20 18:55:45.818659 unknown[1991]: wrote ssh authorized keys file for user: core Jun 20 18:55:45.830774 locksmithd[1921]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:55:45.840616 systemd-hostnamed[1920]: Hostname set to (transient) Jun 20 18:55:45.840724 systemd-resolved[1786]: System hostname changed to 'ip-172-31-28-28'. Jun 20 18:55:45.854564 update-ssh-keys[2074]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:55:45.856469 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 18:55:45.860596 systemd[1]: Finished sshkeys.service. Jun 20 18:55:45.959749 containerd[1906]: time="2025-06-20T18:55:45.959433462Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 18:55:46.014637 containerd[1906]: time="2025-06-20T18:55:46.014422138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018303436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018346977Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018370398Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018515610Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018534711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018588903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018603848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018808127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018824800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018841192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021094 containerd[1906]: time="2025-06-20T18:55:46.018854718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021383 containerd[1906]: time="2025-06-20T18:55:46.018921068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021676 containerd[1906]: time="2025-06-20T18:55:46.021656818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:55:46.021906 containerd[1906]: time="2025-06-20T18:55:46.021885930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:55:46.024891 containerd[1906]: time="2025-06-20T18:55:46.024571102Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 18:55:46.024891 containerd[1906]: time="2025-06-20T18:55:46.024723992Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 18:55:46.024891 containerd[1906]: time="2025-06-20T18:55:46.024771424Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:55:46.028402 containerd[1906]: time="2025-06-20T18:55:46.028375756Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 18:55:46.028510 containerd[1906]: time="2025-06-20T18:55:46.028498073Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 18:55:46.028587 containerd[1906]: time="2025-06-20T18:55:46.028577176Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 18:55:46.028635 containerd[1906]: time="2025-06-20T18:55:46.028626113Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 18:55:46.028679 containerd[1906]: time="2025-06-20T18:55:46.028670643Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 18:55:46.028848 containerd[1906]: time="2025-06-20T18:55:46.028834630Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 18:55:46.029195 containerd[1906]: time="2025-06-20T18:55:46.029181014Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030217568Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030239429Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030256088Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030271577Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030284474Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030296775Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030310791Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030324432Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030337524Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030360105Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030371838Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030392159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030404158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030658 containerd[1906]: time="2025-06-20T18:55:46.030416362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030436700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030448415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030461247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030473214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030485172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030500208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030513949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030525835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030536647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030548873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030562593Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030582656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030596481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.030972 containerd[1906]: time="2025-06-20T18:55:46.030606589Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 18:55:46.031307 containerd[1906]: time="2025-06-20T18:55:46.031293872Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 18:55:46.031431 containerd[1906]: time="2025-06-20T18:55:46.031417427Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 18:55:46.031476 containerd[1906]: time="2025-06-20T18:55:46.031467075Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 18:55:46.031529 containerd[1906]: time="2025-06-20T18:55:46.031517785Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 18:55:46.031570 containerd[1906]: time="2025-06-20T18:55:46.031561336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 18:55:46.031619 containerd[1906]: time="2025-06-20T18:55:46.031610796Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 18:55:46.031660 containerd[1906]: time="2025-06-20T18:55:46.031652155Z" level=info msg="NRI interface is disabled by configuration." Jun 20 18:55:46.031713 containerd[1906]: time="2025-06-20T18:55:46.031704140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 20 18:55:46.032680 containerd[1906]: time="2025-06-20T18:55:46.032616472Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 18:55:46.032680 containerd[1906]: time="2025-06-20T18:55:46.032685006Z" level=info msg="Connect containerd service" Jun 20 18:55:46.032909 containerd[1906]: time="2025-06-20T18:55:46.032724780Z" level=info msg="using legacy CRI server" Jun 20 18:55:46.032909 containerd[1906]: time="2025-06-20T18:55:46.032732126Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:55:46.032909 containerd[1906]: time="2025-06-20T18:55:46.032857661Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 18:55:46.033458 containerd[1906]: time="2025-06-20T18:55:46.033432450Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:55:46.033684 
containerd[1906]: time="2025-06-20T18:55:46.033566647Z" level=info msg="Start subscribing containerd event" Jun 20 18:55:46.033684 containerd[1906]: time="2025-06-20T18:55:46.033620757Z" level=info msg="Start recovering state" Jun 20 18:55:46.036092 containerd[1906]: time="2025-06-20T18:55:46.033746398Z" level=info msg="Start event monitor" Jun 20 18:55:46.036092 containerd[1906]: time="2025-06-20T18:55:46.033764483Z" level=info msg="Start snapshots syncer" Jun 20 18:55:46.036092 containerd[1906]: time="2025-06-20T18:55:46.033770630Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:55:46.036092 containerd[1906]: time="2025-06-20T18:55:46.033812280Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:55:46.036092 containerd[1906]: time="2025-06-20T18:55:46.033772836Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:55:46.036092 containerd[1906]: time="2025-06-20T18:55:46.033840592Z" level=info msg="Start streaming server" Jun 20 18:55:46.036092 containerd[1906]: time="2025-06-20T18:55:46.033891536Z" level=info msg="containerd successfully booted in 0.077526s" Jun 20 18:55:46.034451 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:55:46.042004 sshd_keygen[1923]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:55:46.068685 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:55:46.075957 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:55:46.082286 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:55:46.082683 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:55:46.090838 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:55:46.099420 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:55:46.107505 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:55:46.116635 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 18:55:46.117773 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:55:46.254460 tar[1894]: linux-amd64/LICENSE Jun 20 18:55:46.254684 tar[1894]: linux-amd64/README.md Jun 20 18:55:46.265511 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:55:46.313207 ntpd[1883]: bind(24) AF_INET6 fe80::434:4ff:fe5c:e3ff%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:55:46.313251 ntpd[1883]: unable to create socket on eth0 (6) for fe80::434:4ff:fe5c:e3ff%2#123 Jun 20 18:55:46.313614 ntpd[1883]: 20 Jun 18:55:46 ntpd[1883]: bind(24) AF_INET6 fe80::434:4ff:fe5c:e3ff%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:55:46.313614 ntpd[1883]: 20 Jun 18:55:46 ntpd[1883]: unable to create socket on eth0 (6) for fe80::434:4ff:fe5c:e3ff%2#123 Jun 20 18:55:46.313614 ntpd[1883]: 20 Jun 18:55:46 ntpd[1883]: failed to init interface for address fe80::434:4ff:fe5c:e3ff%2 Jun 20 18:55:46.313264 ntpd[1883]: failed to init interface for address fe80::434:4ff:fe5c:e3ff%2 Jun 20 18:55:46.898297 systemd-networkd[1783]: eth0: Gained IPv6LL Jun 20 18:55:46.900710 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:55:46.901666 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 18:55:46.913427 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Jun 20 18:55:46.916042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:55:46.920170 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:55:46.975346 amazon-ssm-agent[2102]: Initializing new seelog logger Jun 20 18:55:46.975346 amazon-ssm-agent[2102]: New Seelog Logger Creation Complete Jun 20 18:55:46.975346 amazon-ssm-agent[2102]: 2025/06/20 18:55:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:55:46.975346 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:55:46.975346 amazon-ssm-agent[2102]: 2025/06/20 18:55:46 processing appconfig overrides Jun 20 18:55:46.975346 amazon-ssm-agent[2102]: 2025/06/20 18:55:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:55:46.975346 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:55:46.975346 amazon-ssm-agent[2102]: 2025/06/20 18:55:46 processing appconfig overrides Jun 20 18:55:46.974180 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:55:46.975781 amazon-ssm-agent[2102]: 2025-06-20 18:55:46 INFO Proxy environment variables: Jun 20 18:55:46.976282 amazon-ssm-agent[2102]: 2025/06/20 18:55:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:55:46.976282 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:55:46.976282 amazon-ssm-agent[2102]: 2025/06/20 18:55:46 processing appconfig overrides Jun 20 18:55:46.979653 amazon-ssm-agent[2102]: 2025/06/20 18:55:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:55:46.979653 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:55:46.979887 amazon-ssm-agent[2102]: 2025/06/20 18:55:46 processing appconfig overrides Jun 20 18:55:47.075279 amazon-ssm-agent[2102]: 2025-06-20 18:55:46 INFO https_proxy: Jun 20 18:55:47.174041 amazon-ssm-agent[2102]: 2025-06-20 18:55:46 INFO http_proxy: Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:46 INFO no_proxy: Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:46 INFO Checking if agent identity type OnPrem can be assumed Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:46 INFO Checking if agent identity type EC2 can be assumed Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO Agent will take identity from EC2 Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [amazon-ssm-agent] Starting Core Agent Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [Registrar] Starting registrar module Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [EC2Identity] EC2 registration was successful. Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [CredentialRefresher] credentialRefresher has started Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [CredentialRefresher] Starting credentials refresher loop Jun 20 18:55:47.258204 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 20 18:55:47.271846 amazon-ssm-agent[2102]: 2025-06-20 18:55:47 INFO [CredentialRefresher] Next credential rotation will be in 31.183327318783334 minutes Jun 20 18:55:48.271988 amazon-ssm-agent[2102]: 2025-06-20 18:55:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 20 18:55:48.372664 amazon-ssm-agent[2102]: 2025-06-20 18:55:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2122) started Jun 20 18:55:48.473437 amazon-ssm-agent[2102]: 2025-06-20 18:55:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 20 18:55:48.516773 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:55:48.523397 systemd[1]: Started sshd@0-172.31.28.28:22-139.178.68.195:57340.service - OpenSSH per-connection server daemon (139.178.68.195:57340). Jun 20 18:55:48.602899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:55:48.603964 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:55:48.604910 systemd[1]: Startup finished in 691ms (kernel) + 6.992s (initrd) + 6.945s (userspace) = 14.629s. Jun 20 18:55:48.607884 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:55:48.721003 sshd[2134]: Accepted publickey for core from 139.178.68.195 port 57340 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:55:48.723562 sshd-session[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:55:48.734193 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:55:48.739361 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:55:48.742460 systemd-logind[1888]: New session 1 of user core. Jun 20 18:55:48.752098 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:55:48.759399 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:55:48.763809 (systemd)[2148]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:55:48.766631 systemd-logind[1888]: New session c1 of user core. Jun 20 18:55:48.920377 systemd[2148]: Queued start job for default target default.target. Jun 20 18:55:48.925177 systemd[2148]: Created slice app.slice - User Application Slice. Jun 20 18:55:48.925209 systemd[2148]: Reached target paths.target - Paths. Jun 20 18:55:48.925255 systemd[2148]: Reached target timers.target - Timers. 
Jun 20 18:55:48.926665 systemd[2148]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:55:48.938849 systemd[2148]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:55:48.938965 systemd[2148]: Reached target sockets.target - Sockets. Jun 20 18:55:48.939024 systemd[2148]: Reached target basic.target - Basic System. Jun 20 18:55:48.939064 systemd[2148]: Reached target default.target - Main User Target. Jun 20 18:55:48.939118 systemd[2148]: Startup finished in 165ms. Jun 20 18:55:48.939235 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:55:48.947302 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:55:49.098378 systemd[1]: Started sshd@1-172.31.28.28:22-139.178.68.195:57346.service - OpenSSH per-connection server daemon (139.178.68.195:57346). Jun 20 18:55:49.264519 sshd[2164]: Accepted publickey for core from 139.178.68.195 port 57346 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:55:49.265886 sshd-session[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:55:49.271279 systemd-logind[1888]: New session 2 of user core. Jun 20 18:55:49.276246 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 18:55:49.313151 ntpd[1883]: Listen normally on 7 eth0 [fe80::434:4ff:fe5c:e3ff%2]:123 Jun 20 18:55:49.313496 ntpd[1883]: 20 Jun 18:55:49 ntpd[1883]: Listen normally on 7 eth0 [fe80::434:4ff:fe5c:e3ff%2]:123 Jun 20 18:55:49.398849 sshd[2166]: Connection closed by 139.178.68.195 port 57346 Jun 20 18:55:49.399401 sshd-session[2164]: pam_unix(sshd:session): session closed for user core Jun 20 18:55:49.403364 systemd-logind[1888]: Session 2 logged out. Waiting for processes to exit. Jun 20 18:55:49.403954 systemd[1]: sshd@1-172.31.28.28:22-139.178.68.195:57346.service: Deactivated successfully. Jun 20 18:55:49.405942 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 18:55:49.408878 systemd-logind[1888]: Removed session 2. Jun 20 18:55:49.435729 systemd[1]: Started sshd@2-172.31.28.28:22-139.178.68.195:57354.service - OpenSSH per-connection server daemon (139.178.68.195:57354). Jun 20 18:55:49.637542 sshd[2172]: Accepted publickey for core from 139.178.68.195 port 57354 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:55:49.637942 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:55:49.644608 systemd-logind[1888]: New session 3 of user core. Jun 20 18:55:49.649242 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:55:49.689656 kubelet[2141]: E0620 18:55:49.689465 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:55:49.692211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:55:49.692361 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:55:49.692680 systemd[1]: kubelet.service: Consumed 1.022s CPU time, 265.7M memory peak. 
Jun 20 18:55:49.767873 sshd[2174]: Connection closed by 139.178.68.195 port 57354 Jun 20 18:55:49.768625 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Jun 20 18:55:49.772191 systemd[1]: sshd@2-172.31.28.28:22-139.178.68.195:57354.service: Deactivated successfully. Jun 20 18:55:49.774392 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 18:55:49.775908 systemd-logind[1888]: Session 3 logged out. Waiting for processes to exit. Jun 20 18:55:49.777447 systemd-logind[1888]: Removed session 3. Jun 20 18:55:49.804413 systemd[1]: Started sshd@3-172.31.28.28:22-139.178.68.195:57360.service - OpenSSH per-connection server daemon (139.178.68.195:57360). Jun 20 18:55:49.968190 sshd[2181]: Accepted publickey for core from 139.178.68.195 port 57360 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:55:49.969152 sshd-session[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:55:49.974352 systemd-logind[1888]: New session 4 of user core. Jun 20 18:55:49.989395 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:55:50.108032 sshd[2183]: Connection closed by 139.178.68.195 port 57360 Jun 20 18:55:50.108626 sshd-session[2181]: pam_unix(sshd:session): session closed for user core Jun 20 18:55:50.111341 systemd[1]: sshd@3-172.31.28.28:22-139.178.68.195:57360.service: Deactivated successfully. Jun 20 18:55:50.113024 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:55:50.114320 systemd-logind[1888]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:55:50.115215 systemd-logind[1888]: Removed session 4. Jun 20 18:55:50.142464 systemd[1]: Started sshd@4-172.31.28.28:22-139.178.68.195:57376.service - OpenSSH per-connection server daemon (139.178.68.195:57376). Jun 20 18:55:50.304780 sshd[2189]: Accepted publickey for core from 139.178.68.195 port 57376 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:55:50.306105 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:55:50.310793 systemd-logind[1888]: New session 5 of user core. Jun 20 18:55:50.322323 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 18:55:50.449388 sudo[2192]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:55:50.449684 sudo[2192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:55:50.462876 sudo[2192]: pam_unix(sudo:session): session closed for user root Jun 20 18:55:50.485843 sshd[2191]: Connection closed by 139.178.68.195 port 57376 Jun 20 18:55:50.486581 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Jun 20 18:55:50.489810 systemd[1]: sshd@4-172.31.28.28:22-139.178.68.195:57376.service: Deactivated successfully. Jun 20 18:55:50.491508 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:55:50.493039 systemd-logind[1888]: Session 5 logged out. Waiting for processes to exit. Jun 20 18:55:50.493906 systemd-logind[1888]: Removed session 5. Jun 20 18:55:50.517172 systemd[1]: Started sshd@5-172.31.28.28:22-139.178.68.195:57390.service - OpenSSH per-connection server daemon (139.178.68.195:57390). 
Jun 20 18:55:50.680629 sshd[2198]: Accepted publickey for core from 139.178.68.195 port 57390 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:55:50.681922 sshd-session[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:55:50.686338 systemd-logind[1888]: New session 6 of user core. Jun 20 18:55:50.693290 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 18:55:50.792153 sudo[2202]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 18:55:50.792432 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:55:50.796173 sudo[2202]: pam_unix(sudo:session): session closed for user root Jun 20 18:55:50.801653 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 18:55:50.801929 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:55:50.815435 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:55:50.843356 augenrules[2224]: No rules Jun 20 18:55:50.844663 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:55:50.844887 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:55:50.845753 sudo[2201]: pam_unix(sudo:session): session closed for user root Jun 20 18:55:50.868583 sshd[2200]: Connection closed by 139.178.68.195 port 57390 Jun 20 18:55:50.869066 sshd-session[2198]: pam_unix(sshd:session): session closed for user core Jun 20 18:55:50.871877 systemd[1]: sshd@5-172.31.28.28:22-139.178.68.195:57390.service: Deactivated successfully. Jun 20 18:55:50.873593 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:55:50.874838 systemd-logind[1888]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:55:50.876052 systemd-logind[1888]: Removed session 6. Jun 20 18:55:50.905410 systemd[1]: Started sshd@6-172.31.28.28:22-139.178.68.195:57404.service - OpenSSH per-connection server daemon (139.178.68.195:57404). Jun 20 18:55:51.062117 sshd[2233]: Accepted publickey for core from 139.178.68.195 port 57404 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:55:51.063855 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:55:51.068708 systemd-logind[1888]: New session 7 of user core. Jun 20 18:55:51.079351 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 18:55:51.173948 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 18:55:51.174284 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:55:51.780411 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 18:55:51.782531 (dockerd)[2252]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 18:55:53.861522 systemd-resolved[1786]: Clock change detected. Flushing caches. Jun 20 18:55:53.960446 dockerd[2252]: time="2025-06-20T18:55:53.960387853Z" level=info msg="Starting up" Jun 20 18:55:54.204657 dockerd[2252]: time="2025-06-20T18:55:54.204531648Z" level=info msg="Loading containers: start." Jun 20 18:55:54.369315 kernel: Initializing XFRM netlink socket Jun 20 18:55:54.397701 (udev-worker)[2276]: Network interface NamePolicy= disabled on kernel command line. 
Jun 20 18:55:54.450498 systemd-networkd[1783]: docker0: Link UP Jun 20 18:55:54.473784 dockerd[2252]: time="2025-06-20T18:55:54.473541357Z" level=info msg="Loading containers: done." Jun 20 18:55:54.489311 dockerd[2252]: time="2025-06-20T18:55:54.489262643Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 18:55:54.489460 dockerd[2252]: time="2025-06-20T18:55:54.489360420Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 18:55:54.489489 dockerd[2252]: time="2025-06-20T18:55:54.489463448Z" level=info msg="Daemon has completed initialization" Jun 20 18:55:54.521395 dockerd[2252]: time="2025-06-20T18:55:54.521337673Z" level=info msg="API listen on /run/docker.sock" Jun 20 18:55:54.521967 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 18:55:55.527621 containerd[1906]: time="2025-06-20T18:55:55.527580011Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jun 20 18:55:56.084996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104143893.mount: Deactivated successfully. Jun 20 18:55:58.105760 containerd[1906]: time="2025-06-20T18:55:58.105703699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:58.107644 containerd[1906]: time="2025-06-20T18:55:58.107592513Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jun 20 18:55:58.110117 containerd[1906]: time="2025-06-20T18:55:58.110058130Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:58.115879 containerd[1906]: time="2025-06-20T18:55:58.113694818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:58.115879 containerd[1906]: time="2025-06-20T18:55:58.115596599Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.587977175s" Jun 20 18:55:58.115879 containerd[1906]: time="2025-06-20T18:55:58.115640291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jun 20 18:55:58.117008 containerd[1906]: time="2025-06-20T18:55:58.116972469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jun 20 18:55:59.933423 containerd[1906]: time="2025-06-20T18:55:59.933372329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:59.934727 containerd[1906]: time="2025-06-20T18:55:59.934592963Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jun 20 
18:55:59.936100 containerd[1906]: time="2025-06-20T18:55:59.935773444Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:59.938660 containerd[1906]: time="2025-06-20T18:55:59.938631090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:59.939465 containerd[1906]: time="2025-06-20T18:55:59.939433342Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.822322986s" Jun 20 18:55:59.939465 containerd[1906]: time="2025-06-20T18:55:59.939466553Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jun 20 18:55:59.940596 containerd[1906]: time="2025-06-20T18:55:59.940567761Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jun 20 18:56:01.491578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:56:01.501703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:56:01.721363 containerd[1906]: time="2025-06-20T18:56:01.719738220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:01.724621 containerd[1906]: time="2025-06-20T18:56:01.724482959Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jun 20 18:56:01.728256 containerd[1906]: time="2025-06-20T18:56:01.727020709Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:01.733939 containerd[1906]: time="2025-06-20T18:56:01.733892848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:01.734954 containerd[1906]: time="2025-06-20T18:56:01.734773569Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.794175778s" Jun 20 18:56:01.735513 containerd[1906]: time="2025-06-20T18:56:01.735491868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jun 20 18:56:01.737062 containerd[1906]: time="2025-06-20T18:56:01.737020730Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 20 18:56:01.795693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:56:01.818701 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:56:01.895101 kubelet[2512]: E0620 18:56:01.895046 2512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:56:01.899281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:56:01.899493 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:56:01.900326 systemd[1]: kubelet.service: Consumed 189ms CPU time, 108.7M memory peak. Jun 20 18:56:02.841854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891592190.mount: Deactivated successfully. Jun 20 18:56:03.398425 containerd[1906]: time="2025-06-20T18:56:03.398374332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:03.399485 containerd[1906]: time="2025-06-20T18:56:03.399422672Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jun 20 18:56:03.400831 containerd[1906]: time="2025-06-20T18:56:03.400767625Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:03.403343 containerd[1906]: time="2025-06-20T18:56:03.403301527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:03.404478 containerd[1906]: time="2025-06-20T18:56:03.404274985Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.667122777s" Jun 20 18:56:03.404478 containerd[1906]: time="2025-06-20T18:56:03.404314892Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 20 18:56:03.405042 containerd[1906]: time="2025-06-20T18:56:03.404981953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 18:56:03.879337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355476610.mount: Deactivated successfully. 
Jun 20 18:56:04.907449 containerd[1906]: time="2025-06-20T18:56:04.907393873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:04.908941 containerd[1906]: time="2025-06-20T18:56:04.908886519Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 20 18:56:04.910204 containerd[1906]: time="2025-06-20T18:56:04.909763649Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:04.912733 containerd[1906]: time="2025-06-20T18:56:04.912698586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:04.914028 containerd[1906]: time="2025-06-20T18:56:04.913988015Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.50897076s" Jun 20 18:56:04.914111 containerd[1906]: time="2025-06-20T18:56:04.914032309Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 18:56:04.914874 containerd[1906]: time="2025-06-20T18:56:04.914852271Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 18:56:05.391233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919955318.mount: Deactivated successfully. 
Jun 20 18:56:05.405766 containerd[1906]: time="2025-06-20T18:56:05.405702272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:05.407490 containerd[1906]: time="2025-06-20T18:56:05.407418161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 20 18:56:05.409731 containerd[1906]: time="2025-06-20T18:56:05.409675123Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:05.413118 containerd[1906]: time="2025-06-20T18:56:05.413062938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:05.413839 containerd[1906]: time="2025-06-20T18:56:05.413631141Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 498.638629ms" Jun 20 18:56:05.413839 containerd[1906]: time="2025-06-20T18:56:05.413661381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 18:56:05.414370 containerd[1906]: time="2025-06-20T18:56:05.414347501Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 20 18:56:05.947308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923280027.mount: Deactivated successfully. Jun 20 18:56:08.188427 containerd[1906]: time="2025-06-20T18:56:08.188364715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:08.189890 containerd[1906]: time="2025-06-20T18:56:08.189835465Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jun 20 18:56:08.191774 containerd[1906]: time="2025-06-20T18:56:08.191687641Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:08.200780 containerd[1906]: time="2025-06-20T18:56:08.200245254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:08.201684 containerd[1906]: time="2025-06-20T18:56:08.201638407Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.787256788s" Jun 20 18:56:08.201790 containerd[1906]: time="2025-06-20T18:56:08.201689183Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 20 18:56:10.991987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:56:10.992239 systemd[1]: kubelet.service: Consumed 189ms CPU time, 108.7M memory peak. Jun 20 18:56:10.998589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:56:11.037694 systemd[1]: Reload requested from client PID 2664 ('systemctl') (unit session-7.scope)... Jun 20 18:56:11.037713 systemd[1]: Reloading... Jun 20 18:56:11.163266 zram_generator::config[2708]: No configuration found. Jun 20 18:56:11.346746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:56:11.465065 systemd[1]: Reloading finished in 426 ms. Jun 20 18:56:11.507681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:56:11.513947 (kubelet)[2764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:56:11.514787 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:56:11.515623 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:56:11.515879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:56:11.515937 systemd[1]: kubelet.service: Consumed 131ms CPU time, 98.2M memory peak. Jun 20 18:56:11.521487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:56:11.729036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:56:11.734656 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:56:11.786524 kubelet[2776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:56:11.786524 kubelet[2776]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 18:56:11.786524 kubelet[2776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
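The restarted kubelet (2776) warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flags. Two of them have direct KubeletConfiguration equivalents, sketched below; the socket path is an assumption (the usual containerd CRI socket), while the volume plugin directory is the /opt/libexec path the kubelet recreates a few entries later. --pod-infra-container-image has no config-file counterpart and, as the warning itself says, will go away once image garbage collection takes the sandbox image from CRI.

# Hedged sketch of the config-file equivalents for two of the deprecated flags.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock        # assumed socket; replaces --container-runtime-endpoint
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # replaces --volume-plugin-dir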
Jun 20 18:56:11.788742 kubelet[2776]: I0620 18:56:11.788678 2776 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:56:12.261953 kubelet[2776]: I0620 18:56:12.261910 2776 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 18:56:12.261953 kubelet[2776]: I0620 18:56:12.261945 2776 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:56:12.262889 kubelet[2776]: I0620 18:56:12.262326 2776 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 18:56:12.307970 kubelet[2776]: E0620 18:56:12.307912 2776 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:12.310994 kubelet[2776]: I0620 18:56:12.310935 2776 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:56:12.329606 kubelet[2776]: E0620 18:56:12.329558 2776 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:56:12.329606 kubelet[2776]: I0620 18:56:12.329596 2776 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:56:12.336583 kubelet[2776]: I0620 18:56:12.336548 2776 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:56:12.338869 kubelet[2776]: I0620 18:56:12.338827 2776 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 18:56:12.339047 kubelet[2776]: I0620 18:56:12.339007 2776 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:56:12.339274 kubelet[2776]: I0620 18:56:12.339042 2776 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:56:12.339379 kubelet[2776]: I0620 18:56:12.339281 2776 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:56:12.339379 kubelet[2776]: I0620 18:56:12.339292 2776 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 18:56:12.339436 kubelet[2776]: I0620 18:56:12.339390 2776 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:56:12.343190 kubelet[2776]: I0620 18:56:12.343151 2776 kubelet.go:408] "Attempting to sync node with API server" Jun 20 18:56:12.343190 kubelet[2776]: I0620 18:56:12.343186 2776 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:56:12.344553 kubelet[2776]: I0620 18:56:12.344511 2776 kubelet.go:314] "Adding apiserver pod source" Jun 20 18:56:12.344553 kubelet[2776]: I0620 18:56:12.344548 2776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:56:12.346760 kubelet[2776]: W0620 18:56:12.346549 2776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-28&limit=500&resourceVersion=0": dial tcp 172.31.28.28:6443: connect: connection refused Jun 20 18:56:12.346760 kubelet[2776]: E0620 18:56:12.346609 2776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.28.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-28&limit=500&resourceVersion=0\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:12.349078 kubelet[2776]: W0620 18:56:12.348921 2776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.28:6443: connect: connection refused Jun 20 18:56:12.349078 kubelet[2776]: E0620 18:56:12.348971 2776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:12.349308 kubelet[2776]: I0620 18:56:12.349295 2776 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:56:12.357240 kubelet[2776]: I0620 18:56:12.356370 2776 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:56:12.357449 kubelet[2776]: W0620 18:56:12.357426 2776 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 18:56:12.359332 kubelet[2776]: I0620 18:56:12.359280 2776 server.go:1274] "Started kubelet" Jun 20 18:56:12.366009 kubelet[2776]: I0620 18:56:12.365963 2776 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:56:12.368278 kubelet[2776]: I0620 18:56:12.366872 2776 server.go:449] "Adding debug handlers to kubelet server" Jun 20 18:56:12.370403 kubelet[2776]: I0620 18:56:12.370353 2776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:56:12.370728 kubelet[2776]: I0620 18:56:12.370698 2776 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:56:12.372150 kubelet[2776]: I0620 18:56:12.372123 2776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:56:12.374335 kubelet[2776]: E0620 18:56:12.370263 2776 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.28:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-28.184ad527dc0ead0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-28,UID:ip-172-31-28-28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-28,},FirstTimestamp:2025-06-20 18:56:12.359249166 +0000 UTC m=+0.620288347,LastTimestamp:2025-06-20 18:56:12.359249166 +0000 UTC m=+0.620288347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-28,}" Jun 20 18:56:12.374335 kubelet[2776]: I0620 18:56:12.374128 2776 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:56:12.381231 kubelet[2776]: E0620 18:56:12.381118 2776 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"ip-172-31-28-28\" not found" Jun 20 18:56:12.381231 kubelet[2776]: I0620 18:56:12.381156 2776 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 18:56:12.382706 kubelet[2776]: I0620 18:56:12.382684 2776 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 18:56:12.382748 kubelet[2776]: I0620 18:56:12.382740 2776 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:56:12.383120 kubelet[2776]: W0620 18:56:12.383080 2776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.28:6443: connect: connection refused Jun 20 18:56:12.383159 kubelet[2776]: E0620 18:56:12.383128 2776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:12.383585 kubelet[2776]: E0620 18:56:12.383556 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-28?timeout=10s\": dial tcp 172.31.28.28:6443: connect: connection refused" interval="200ms" Jun 20 18:56:12.385242 kubelet[2776]: E0620 18:56:12.385201 2776 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:56:12.386031 kubelet[2776]: I0620 18:56:12.385631 2776 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:56:12.386031 kubelet[2776]: I0620 18:56:12.385640 2776 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:56:12.386031 kubelet[2776]: I0620 18:56:12.385720 2776 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:56:12.398056 kubelet[2776]: I0620 18:56:12.397386 2776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:56:12.399604 kubelet[2776]: I0620 18:56:12.399301 2776 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 18:56:12.399604 kubelet[2776]: I0620 18:56:12.399325 2776 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 18:56:12.399604 kubelet[2776]: I0620 18:56:12.399346 2776 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 18:56:12.399604 kubelet[2776]: E0620 18:56:12.399382 2776 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:56:12.408619 kubelet[2776]: W0620 18:56:12.408564 2776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.28:6443: connect: connection refused Jun 20 18:56:12.408729 kubelet[2776]: E0620 18:56:12.408625 2776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:12.416821 kubelet[2776]: I0620 18:56:12.416796 2776 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 18:56:12.416821 kubelet[2776]: I0620 18:56:12.416812 2776 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 18:56:12.416821 kubelet[2776]: I0620 18:56:12.416827 2776 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:56:12.421575 kubelet[2776]: I0620 18:56:12.421528 2776 policy_none.go:49] "None policy: Start" Jun 20 18:56:12.422286 kubelet[2776]: I0620 18:56:12.422267 2776 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 18:56:12.422391 kubelet[2776]: I0620 18:56:12.422379 2776 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:56:12.437470 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:56:12.448825 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 18:56:12.452913 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:56:12.460483 kubelet[2776]: I0620 18:56:12.460346 2776 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:56:12.460589 kubelet[2776]: I0620 18:56:12.460521 2776 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:56:12.460589 kubelet[2776]: I0620 18:56:12.460544 2776 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:56:12.462803 kubelet[2776]: I0620 18:56:12.462558 2776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:56:12.464178 kubelet[2776]: E0620 18:56:12.464145 2776 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-28\" not found" Jun 20 18:56:12.509922 systemd[1]: Created slice kubepods-burstable-pod00a3461947ae8c28bdbd7b322d3efa45.slice - libcontainer container kubepods-burstable-pod00a3461947ae8c28bdbd7b322d3efa45.slice. Jun 20 18:56:12.531638 systemd[1]: Created slice kubepods-burstable-podf020bc12febf3610d2a2b5ee834dde6d.slice - libcontainer container kubepods-burstable-podf020bc12febf3610d2a2b5ee834dde6d.slice. 
Jun 20 18:56:12.542423 systemd[1]: Created slice kubepods-burstable-pod8622643682c89cf05bd11f1f02ecda26.slice - libcontainer container kubepods-burstable-pod8622643682c89cf05bd11f1f02ecda26.slice. Jun 20 18:56:12.562632 kubelet[2776]: I0620 18:56:12.562593 2776 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-28" Jun 20 18:56:12.563012 kubelet[2776]: E0620 18:56:12.562980 2776 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.28:6443/api/v1/nodes\": dial tcp 172.31.28.28:6443: connect: connection refused" node="ip-172-31-28-28" Jun 20 18:56:12.585086 kubelet[2776]: E0620 18:56:12.585043 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-28?timeout=10s\": dial tcp 172.31.28.28:6443: connect: connection refused" interval="400ms" Jun 20 18:56:12.683514 kubelet[2776]: I0620 18:56:12.683430 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:12.683514 kubelet[2776]: I0620 18:56:12.683479 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8622643682c89cf05bd11f1f02ecda26-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-28\" (UID: \"8622643682c89cf05bd11f1f02ecda26\") " pod="kube-system/kube-apiserver-ip-172-31-28-28" Jun 20 18:56:12.683868 kubelet[2776]: I0620 18:56:12.683540 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8622643682c89cf05bd11f1f02ecda26-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-28\" (UID: \"8622643682c89cf05bd11f1f02ecda26\") " pod="kube-system/kube-apiserver-ip-172-31-28-28" Jun 20 18:56:12.683868 kubelet[2776]: I0620 18:56:12.683594 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:12.683868 kubelet[2776]: I0620 18:56:12.683623 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:12.683868 kubelet[2776]: I0620 18:56:12.683656 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:12.683868 kubelet[2776]: I0620 18:56:12.683679 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:12.684015 kubelet[2776]: I0620 18:56:12.683758 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f020bc12febf3610d2a2b5ee834dde6d-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-28\" (UID: \"f020bc12febf3610d2a2b5ee834dde6d\") " pod="kube-system/kube-scheduler-ip-172-31-28-28" Jun 20 18:56:12.684015 kubelet[2776]: I0620 18:56:12.683810 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8622643682c89cf05bd11f1f02ecda26-ca-certs\") pod \"kube-apiserver-ip-172-31-28-28\" (UID: \"8622643682c89cf05bd11f1f02ecda26\") " pod="kube-system/kube-apiserver-ip-172-31-28-28" Jun 20 18:56:12.765415 kubelet[2776]: I0620 18:56:12.765315 2776 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-28" Jun 20 18:56:12.765684 kubelet[2776]: E0620 18:56:12.765657 2776 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.28:6443/api/v1/nodes\": dial tcp 172.31.28.28:6443: connect: connection refused" node="ip-172-31-28-28" Jun 20 18:56:12.830052 containerd[1906]: time="2025-06-20T18:56:12.829933734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-28,Uid:00a3461947ae8c28bdbd7b322d3efa45,Namespace:kube-system,Attempt:0,}" Jun 20 18:56:12.840687 containerd[1906]: time="2025-06-20T18:56:12.840639484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-28,Uid:f020bc12febf3610d2a2b5ee834dde6d,Namespace:kube-system,Attempt:0,}" Jun 20 18:56:12.846419 containerd[1906]: time="2025-06-20T18:56:12.846378775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-28,Uid:8622643682c89cf05bd11f1f02ecda26,Namespace:kube-system,Attempt:0,}" Jun 20 18:56:12.986163 kubelet[2776]: E0620 18:56:12.986113 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-28?timeout=10s\": dial tcp 172.31.28.28:6443: connect: connection refused" interval="800ms" Jun 20 18:56:13.167552 kubelet[2776]: I0620 18:56:13.167313 2776 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-28" Jun 20 18:56:13.167651 kubelet[2776]: E0620 18:56:13.167585 2776 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.28:6443/api/v1/nodes\": dial tcp 172.31.28.28:6443: connect: connection refused" node="ip-172-31-28-28" Jun 20 18:56:13.260814 kubelet[2776]: W0620 18:56:13.260753 2776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.28:6443: connect: connection refused Jun 20 18:56:13.260962 kubelet[2776]: E0620 18:56:13.260820 2776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.28.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:13.327441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703343558.mount: Deactivated successfully. Jun 20 18:56:13.340063 containerd[1906]: time="2025-06-20T18:56:13.340005678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:56:13.348060 containerd[1906]: time="2025-06-20T18:56:13.347870559Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 20 18:56:13.350000 containerd[1906]: time="2025-06-20T18:56:13.349957721Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:56:13.352393 containerd[1906]: time="2025-06-20T18:56:13.352350256Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:56:13.356241 containerd[1906]: time="2025-06-20T18:56:13.356067663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:56:13.358417 containerd[1906]: time="2025-06-20T18:56:13.358244124Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:56:13.360628 containerd[1906]: time="2025-06-20T18:56:13.360584875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:56:13.361650 containerd[1906]: time="2025-06-20T18:56:13.361611776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 531.575812ms" Jun 20 18:56:13.362515 containerd[1906]: time="2025-06-20T18:56:13.362471334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:56:13.365030 containerd[1906]: time="2025-06-20T18:56:13.364893797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.428054ms" Jun 20 18:56:13.375980 containerd[1906]: time="2025-06-20T18:56:13.375930560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.20155ms" Jun 20 18:56:13.439342 kubelet[2776]: 
W0620 18:56:13.432620 2776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.28:6443: connect: connection refused Jun 20 18:56:13.439342 kubelet[2776]: E0620 18:56:13.432720 2776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:13.560369 containerd[1906]: time="2025-06-20T18:56:13.554887689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:56:13.560369 containerd[1906]: time="2025-06-20T18:56:13.556650625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:56:13.560369 containerd[1906]: time="2025-06-20T18:56:13.556668196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:13.560369 containerd[1906]: time="2025-06-20T18:56:13.556735896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:13.573955 containerd[1906]: time="2025-06-20T18:56:13.573776275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:56:13.573955 containerd[1906]: time="2025-06-20T18:56:13.573848691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:56:13.573955 containerd[1906]: time="2025-06-20T18:56:13.573870771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:13.574193 containerd[1906]: time="2025-06-20T18:56:13.574084609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:13.577296 containerd[1906]: time="2025-06-20T18:56:13.577187516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:56:13.577296 containerd[1906]: time="2025-06-20T18:56:13.577259667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:56:13.577296 containerd[1906]: time="2025-06-20T18:56:13.577275673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:13.577695 containerd[1906]: time="2025-06-20T18:56:13.577643382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:13.596443 systemd[1]: Started cri-containerd-173127a87035844a941584b3076d34d364f9f911b8954dc0d185a8370f874476.scope - libcontainer container 173127a87035844a941584b3076d34d364f9f911b8954dc0d185a8370f874476. 
Jun 20 18:56:13.604287 systemd[1]: Started cri-containerd-19c1eaccf18d85f8bc26fda22bb82b298788a22148987b49cd51f7056038d7f5.scope - libcontainer container 19c1eaccf18d85f8bc26fda22bb82b298788a22148987b49cd51f7056038d7f5. Jun 20 18:56:13.653712 systemd[1]: Started cri-containerd-eb826aab906ee2d956790caa16ac6c6e8f96a2b2ffbcff44eee262d76f23c605.scope - libcontainer container eb826aab906ee2d956790caa16ac6c6e8f96a2b2ffbcff44eee262d76f23c605. Jun 20 18:56:13.711062 containerd[1906]: time="2025-06-20T18:56:13.710755520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-28,Uid:f020bc12febf3610d2a2b5ee834dde6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"19c1eaccf18d85f8bc26fda22bb82b298788a22148987b49cd51f7056038d7f5\"" Jun 20 18:56:13.717462 containerd[1906]: time="2025-06-20T18:56:13.717344840Z" level=info msg="CreateContainer within sandbox \"19c1eaccf18d85f8bc26fda22bb82b298788a22148987b49cd51f7056038d7f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:56:13.719230 containerd[1906]: time="2025-06-20T18:56:13.718870795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-28,Uid:00a3461947ae8c28bdbd7b322d3efa45,Namespace:kube-system,Attempt:0,} returns sandbox id \"173127a87035844a941584b3076d34d364f9f911b8954dc0d185a8370f874476\"" Jun 20 18:56:13.723093 containerd[1906]: time="2025-06-20T18:56:13.722948980Z" level=info msg="CreateContainer within sandbox \"173127a87035844a941584b3076d34d364f9f911b8954dc0d185a8370f874476\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:56:13.741978 containerd[1906]: time="2025-06-20T18:56:13.741859651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-28,Uid:8622643682c89cf05bd11f1f02ecda26,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb826aab906ee2d956790caa16ac6c6e8f96a2b2ffbcff44eee262d76f23c605\"" Jun 20 18:56:13.747386 containerd[1906]: time="2025-06-20T18:56:13.747342743Z" level=info msg="CreateContainer within sandbox \"eb826aab906ee2d956790caa16ac6c6e8f96a2b2ffbcff44eee262d76f23c605\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:56:13.762788 containerd[1906]: time="2025-06-20T18:56:13.762634731Z" level=info msg="CreateContainer within sandbox \"19c1eaccf18d85f8bc26fda22bb82b298788a22148987b49cd51f7056038d7f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe\"" Jun 20 18:56:13.763464 containerd[1906]: time="2025-06-20T18:56:13.763373078Z" level=info msg="StartContainer for \"a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe\"" Jun 20 18:56:13.771470 containerd[1906]: time="2025-06-20T18:56:13.770430574Z" level=info msg="CreateContainer within sandbox \"173127a87035844a941584b3076d34d364f9f911b8954dc0d185a8370f874476\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1\"" Jun 20 18:56:13.771470 containerd[1906]: time="2025-06-20T18:56:13.771015195Z" level=info msg="StartContainer for \"40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1\"" Jun 20 18:56:13.785753 containerd[1906]: time="2025-06-20T18:56:13.785721546Z" level=info msg="CreateContainer within sandbox \"eb826aab906ee2d956790caa16ac6c6e8f96a2b2ffbcff44eee262d76f23c605\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container 
id \"dc7f8b3a837dcafb7dc476e8de70aa4d7427ef78641955a2e0585a4c4aa8e2a3\"" Jun 20 18:56:13.786848 kubelet[2776]: E0620 18:56:13.786811 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-28?timeout=10s\": dial tcp 172.31.28.28:6443: connect: connection refused" interval="1.6s" Jun 20 18:56:13.787664 containerd[1906]: time="2025-06-20T18:56:13.787608626Z" level=info msg="StartContainer for \"dc7f8b3a837dcafb7dc476e8de70aa4d7427ef78641955a2e0585a4c4aa8e2a3\"" Jun 20 18:56:13.793622 systemd[1]: Started cri-containerd-a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe.scope - libcontainer container a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe. Jun 20 18:56:13.810697 systemd[1]: Started cri-containerd-40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1.scope - libcontainer container 40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1. Jun 20 18:56:13.834401 systemd[1]: Started cri-containerd-dc7f8b3a837dcafb7dc476e8de70aa4d7427ef78641955a2e0585a4c4aa8e2a3.scope - libcontainer container dc7f8b3a837dcafb7dc476e8de70aa4d7427ef78641955a2e0585a4c4aa8e2a3. Jun 20 18:56:13.841733 kubelet[2776]: W0620 18:56:13.841672 2776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-28&limit=500&resourceVersion=0": dial tcp 172.31.28.28:6443: connect: connection refused Jun 20 18:56:13.841871 kubelet[2776]: E0620 18:56:13.841743 2776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-28&limit=500&resourceVersion=0\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:13.864734 kubelet[2776]: W0620 18:56:13.864683 2776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.28:6443: connect: connection refused Jun 20 18:56:13.865069 kubelet[2776]: E0620 18:56:13.864962 2776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:13.882487 containerd[1906]: time="2025-06-20T18:56:13.882362273Z" level=info msg="StartContainer for \"a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe\" returns successfully" Jun 20 18:56:13.898629 containerd[1906]: time="2025-06-20T18:56:13.898326339Z" level=info msg="StartContainer for \"40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1\" returns successfully" Jun 20 18:56:13.898629 containerd[1906]: time="2025-06-20T18:56:13.898469474Z" level=info msg="StartContainer for \"dc7f8b3a837dcafb7dc476e8de70aa4d7427ef78641955a2e0585a4c4aa8e2a3\" returns successfully" Jun 20 18:56:13.973530 kubelet[2776]: I0620 18:56:13.972464 2776 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-28" Jun 20 18:56:13.973530 kubelet[2776]: E0620 18:56:13.973439 2776 kubelet_node_status.go:95] "Unable to 
register node with API server" err="Post \"https://172.31.28.28:6443/api/v1/nodes\": dial tcp 172.31.28.28:6443: connect: connection refused" node="ip-172-31-28-28" Jun 20 18:56:14.327308 kubelet[2776]: E0620 18:56:14.327027 2776 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.28:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:56:15.577269 kubelet[2776]: I0620 18:56:15.576375 2776 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-28" Jun 20 18:56:16.808009 kubelet[2776]: E0620 18:56:16.807950 2776 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-28\" not found" node="ip-172-31-28-28" Jun 20 18:56:16.891569 kubelet[2776]: I0620 18:56:16.891272 2776 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-28" Jun 20 18:56:16.891569 kubelet[2776]: E0620 18:56:16.891319 2776 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-28\": node \"ip-172-31-28-28\" not found" Jun 20 18:56:17.352599 kubelet[2776]: I0620 18:56:17.352559 2776 apiserver.go:52] "Watching apiserver" Jun 20 18:56:17.383322 kubelet[2776]: I0620 18:56:17.383275 2776 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 18:56:17.417983 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 20 18:56:18.695633 systemd[1]: Reload requested from client PID 3054 ('systemctl') (unit session-7.scope)... Jun 20 18:56:18.695658 systemd[1]: Reloading... Jun 20 18:56:18.808255 zram_generator::config[3102]: No configuration found. Jun 20 18:56:18.931912 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:56:19.064923 systemd[1]: Reloading finished in 368 ms. Jun 20 18:56:19.100928 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:56:19.116944 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:56:19.117190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:56:19.117263 systemd[1]: kubelet.service: Consumed 995ms CPU time, 128.6M memory peak. Jun 20 18:56:19.123538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:56:19.356187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:56:19.361324 (kubelet)[3159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:56:19.429366 kubelet[3159]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:56:19.429366 kubelet[3159]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
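The certificate_manager errors repeated above come from the kubelet's client-certificate bootstrap: until the API server at 172.31.28.28:6443 answers, every POST to /apis/certificates.k8s.io/v1/certificatesigningrequests is refused. The object it is trying to create is a CSR for the kubernetes.io/kube-apiserver-client-kubelet signer; a hedged sketch of its shape follows, with the per-node PKCS#10 payload left elided and the exact usages possibly differing by kubelet version.

# Sketch of the CertificateSigningRequest the certificate manager keeps failing
# to submit; the request payload is node-specific and omitted here.
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  generateName: csr-
spec:
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  usages:
  - digital signature
  - client auth
  request: <base64-encoded PKCS#10 CSR for system:node:ip-172-31-28-28, omitted>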
Jun 20 18:56:19.429366 kubelet[3159]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:56:19.429366 kubelet[3159]: I0620 18:56:19.428914 3159 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:56:19.439251 kubelet[3159]: I0620 18:56:19.439188 3159 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 18:56:19.439251 kubelet[3159]: I0620 18:56:19.439229 3159 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:56:19.439490 kubelet[3159]: I0620 18:56:19.439474 3159 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 18:56:19.440828 kubelet[3159]: I0620 18:56:19.440787 3159 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 18:56:19.442692 kubelet[3159]: I0620 18:56:19.442548 3159 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:56:19.453407 kubelet[3159]: E0620 18:56:19.453358 3159 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:56:19.453407 kubelet[3159]: I0620 18:56:19.453396 3159 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:56:19.458289 kubelet[3159]: I0620 18:56:19.457162 3159 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:56:19.458289 kubelet[3159]: I0620 18:56:19.457275 3159 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 18:56:19.458289 kubelet[3159]: I0620 18:56:19.457391 3159 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:56:19.458289 kubelet[3159]: I0620 18:56:19.457414 3159 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:56:19.458525 kubelet[3159]: I0620 18:56:19.457577 3159 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:56:19.458525 kubelet[3159]: I0620 18:56:19.457587 3159 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 18:56:19.458525 kubelet[3159]: I0620 18:56:19.457622 3159 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:56:19.458525 kubelet[3159]: I0620 18:56:19.457707 3159 kubelet.go:408] "Attempting to sync node with API server" Jun 20 18:56:19.458525 kubelet[3159]: I0620 18:56:19.457718 3159 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:56:19.458525 kubelet[3159]: I0620 18:56:19.457745 3159 kubelet.go:314] "Adding apiserver pod source" Jun 20 18:56:19.458525 kubelet[3159]: I0620 18:56:19.457759 3159 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:56:19.458525 kubelet[3159]: I0620 18:56:19.458338 3159 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:56:19.460889 kubelet[3159]: I0620 18:56:19.460864 3159 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:56:19.461320 kubelet[3159]: I0620 18:56:19.461304 3159 server.go:1274] "Started kubelet" Jun 20 18:56:19.462966 kubelet[3159]: I0620 18:56:19.462943 3159 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:56:19.470328 kubelet[3159]: I0620 
18:56:19.469095 3159 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:56:19.472241 kubelet[3159]: I0620 18:56:19.471699 3159 server.go:449] "Adding debug handlers to kubelet server" Jun 20 18:56:19.472565 kubelet[3159]: I0620 18:56:19.472537 3159 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:56:19.472712 kubelet[3159]: I0620 18:56:19.472697 3159 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:56:19.472910 kubelet[3159]: I0620 18:56:19.472895 3159 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:56:19.475574 kubelet[3159]: I0620 18:56:19.475552 3159 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 18:56:19.477243 kubelet[3159]: E0620 18:56:19.475813 3159 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-28\" not found" Jun 20 18:56:19.477243 kubelet[3159]: I0620 18:56:19.476065 3159 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 18:56:19.477243 kubelet[3159]: I0620 18:56:19.476164 3159 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:56:19.484055 kubelet[3159]: I0620 18:56:19.484026 3159 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:56:19.485161 kubelet[3159]: I0620 18:56:19.485143 3159 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 18:56:19.485323 kubelet[3159]: I0620 18:56:19.485315 3159 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 18:56:19.485390 kubelet[3159]: I0620 18:56:19.485384 3159 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 18:56:19.485471 kubelet[3159]: E0620 18:56:19.485459 3159 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:56:19.488254 kubelet[3159]: I0620 18:56:19.487680 3159 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:56:19.488254 kubelet[3159]: I0620 18:56:19.487766 3159 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:56:19.501833 kubelet[3159]: I0620 18:56:19.501802 3159 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:56:19.502335 kubelet[3159]: E0620 18:56:19.502309 3159 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:56:19.551892 kubelet[3159]: I0620 18:56:19.551852 3159 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 18:56:19.552077 kubelet[3159]: I0620 18:56:19.552066 3159 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 18:56:19.552143 kubelet[3159]: I0620 18:56:19.552136 3159 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:56:19.552354 kubelet[3159]: I0620 18:56:19.552337 3159 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:56:19.552474 kubelet[3159]: I0620 18:56:19.552439 3159 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:56:19.552528 kubelet[3159]: I0620 18:56:19.552522 3159 policy_none.go:49] "None policy: Start" Jun 20 18:56:19.553085 kubelet[3159]: I0620 18:56:19.553071 3159 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 18:56:19.553201 kubelet[3159]: I0620 18:56:19.553181 3159 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:56:19.553372 kubelet[3159]: I0620 18:56:19.553356 3159 state_mem.go:75] "Updated machine memory state" Jun 20 18:56:19.561154 kubelet[3159]: I0620 18:56:19.561050 3159 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:56:19.561283 kubelet[3159]: I0620 18:56:19.561242 3159 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:56:19.561283 kubelet[3159]: I0620 18:56:19.561253 3159 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:56:19.561490 kubelet[3159]: I0620 18:56:19.561438 3159 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:56:19.595139 kubelet[3159]: E0620 18:56:19.595109 3159 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-28\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-28" Jun 20 18:56:19.664981 kubelet[3159]: I0620 18:56:19.664701 3159 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-28" Jun 20 18:56:19.672358 kubelet[3159]: I0620 18:56:19.672038 3159 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-28-28" Jun 20 18:56:19.672358 kubelet[3159]: I0620 18:56:19.672108 3159 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-28" Jun 20 18:56:19.677949 kubelet[3159]: I0620 18:56:19.677911 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:19.677949 kubelet[3159]: I0620 18:56:19.677950 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:19.678101 kubelet[3159]: I0620 18:56:19.677970 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/8622643682c89cf05bd11f1f02ecda26-ca-certs\") pod \"kube-apiserver-ip-172-31-28-28\" (UID: \"8622643682c89cf05bd11f1f02ecda26\") " pod="kube-system/kube-apiserver-ip-172-31-28-28" Jun 20 18:56:19.678101 kubelet[3159]: I0620 18:56:19.677987 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8622643682c89cf05bd11f1f02ecda26-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-28\" (UID: \"8622643682c89cf05bd11f1f02ecda26\") " pod="kube-system/kube-apiserver-ip-172-31-28-28" Jun 20 18:56:19.678101 kubelet[3159]: I0620 18:56:19.678002 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8622643682c89cf05bd11f1f02ecda26-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-28\" (UID: \"8622643682c89cf05bd11f1f02ecda26\") " pod="kube-system/kube-apiserver-ip-172-31-28-28" Jun 20 18:56:19.678101 kubelet[3159]: I0620 18:56:19.678017 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:19.678101 kubelet[3159]: I0620 18:56:19.678032 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:19.678305 kubelet[3159]: I0620 18:56:19.678050 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00a3461947ae8c28bdbd7b322d3efa45-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-28\" (UID: \"00a3461947ae8c28bdbd7b322d3efa45\") " pod="kube-system/kube-controller-manager-ip-172-31-28-28" Jun 20 18:56:19.678305 kubelet[3159]: I0620 18:56:19.678066 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f020bc12febf3610d2a2b5ee834dde6d-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-28\" (UID: \"f020bc12febf3610d2a2b5ee834dde6d\") " pod="kube-system/kube-scheduler-ip-172-31-28-28" Jun 20 18:56:19.712892 sudo[3192]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:56:19.713207 sudo[3192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:56:20.301097 sudo[3192]: pam_unix(sudo:session): session closed for user root Jun 20 18:56:20.465625 kubelet[3159]: I0620 18:56:20.464288 3159 apiserver.go:52] "Watching apiserver" Jun 20 18:56:20.476737 kubelet[3159]: I0620 18:56:20.476661 3159 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 18:56:20.545662 kubelet[3159]: E0620 18:56:20.545426 3159 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-28\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-28" Jun 20 18:56:20.578032 kubelet[3159]: I0620 18:56:20.577773 3159 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-28" podStartSLOduration=3.577749849 podStartE2EDuration="3.577749849s" podCreationTimestamp="2025-06-20 18:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:56:20.567081835 +0000 UTC m=+1.200827620" watchObservedRunningTime="2025-06-20 18:56:20.577749849 +0000 UTC m=+1.211495634" Jun 20 18:56:20.588698 kubelet[3159]: I0620 18:56:20.588189 3159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-28" podStartSLOduration=1.5881678959999999 podStartE2EDuration="1.588167896s" podCreationTimestamp="2025-06-20 18:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:56:20.578550031 +0000 UTC m=+1.212295815" watchObservedRunningTime="2025-06-20 18:56:20.588167896 +0000 UTC m=+1.221913682" Jun 20 18:56:20.599737 kubelet[3159]: I0620 18:56:20.599268 3159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-28" podStartSLOduration=1.59924723 podStartE2EDuration="1.59924723s" podCreationTimestamp="2025-06-20 18:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:56:20.588633352 +0000 UTC m=+1.222379132" watchObservedRunningTime="2025-06-20 18:56:20.59924723 +0000 UTC m=+1.232993014" Jun 20 18:56:22.206057 sudo[2236]: pam_unix(sudo:session): session closed for user root Jun 20 18:56:22.227956 sshd[2235]: Connection closed by 139.178.68.195 port 57404 Jun 20 18:56:22.228956 sshd-session[2233]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:22.232078 systemd[1]: sshd@6-172.31.28.28:22-139.178.68.195:57404.service: Deactivated successfully. Jun 20 18:56:22.234175 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:56:22.234662 systemd[1]: session-7.scope: Consumed 5.015s CPU time, 208.2M memory peak. Jun 20 18:56:22.236539 systemd-logind[1888]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:56:22.237618 systemd-logind[1888]: Removed session 7. Jun 20 18:56:25.469995 kubelet[3159]: I0620 18:56:25.469949 3159 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:56:25.471500 containerd[1906]: time="2025-06-20T18:56:25.471208222Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:56:25.471893 kubelet[3159]: I0620 18:56:25.471863 3159 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:56:26.111259 systemd[1]: Created slice kubepods-besteffort-pod62b6e636_b638_4faa_a9f2_d8b31e89f866.slice - libcontainer container kubepods-besteffort-pod62b6e636_b638_4faa_a9f2_d8b31e89f866.slice. 
Jun 20 18:56:26.115051 kubelet[3159]: I0620 18:56:26.115009 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-xtables-lock\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115191 kubelet[3159]: I0620 18:56:26.115070 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-host-proc-sys-kernel\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115191 kubelet[3159]: I0620 18:56:26.115095 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-hubble-tls\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115191 kubelet[3159]: I0620 18:56:26.115117 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p2hj\" (UniqueName: \"kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-kube-api-access-6p2hj\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115191 kubelet[3159]: I0620 18:56:26.115142 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-cgroup\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115191 kubelet[3159]: I0620 18:56:26.115169 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62b6e636-b638-4faa-a9f2-d8b31e89f866-kube-proxy\") pod \"kube-proxy-rfgdg\" (UID: \"62b6e636-b638-4faa-a9f2-d8b31e89f866\") " pod="kube-system/kube-proxy-rfgdg" Jun 20 18:56:26.115434 kubelet[3159]: I0620 18:56:26.115189 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62b6e636-b638-4faa-a9f2-d8b31e89f866-lib-modules\") pod \"kube-proxy-rfgdg\" (UID: \"62b6e636-b638-4faa-a9f2-d8b31e89f866\") " pod="kube-system/kube-proxy-rfgdg" Jun 20 18:56:26.115434 kubelet[3159]: I0620 18:56:26.115239 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-etc-cni-netd\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115434 kubelet[3159]: I0620 18:56:26.115265 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/016e655e-394d-46be-bb35-eb13463c6ac4-clustermesh-secrets\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115434 kubelet[3159]: I0620 18:56:26.115292 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-lib-modules\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115434 kubelet[3159]: I0620 18:56:26.115314 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-hostproc\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115434 kubelet[3159]: I0620 18:56:26.115338 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cni-path\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115679 kubelet[3159]: I0620 18:56:26.115362 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-bpf-maps\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115679 kubelet[3159]: I0620 18:56:26.115390 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-run\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115679 kubelet[3159]: I0620 18:56:26.115430 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-config-path\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115679 kubelet[3159]: I0620 18:56:26.115453 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-host-proc-sys-net\") pod \"cilium-dbgms\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " pod="kube-system/cilium-dbgms" Jun 20 18:56:26.115679 kubelet[3159]: I0620 18:56:26.115478 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62b6e636-b638-4faa-a9f2-d8b31e89f866-xtables-lock\") pod \"kube-proxy-rfgdg\" (UID: \"62b6e636-b638-4faa-a9f2-d8b31e89f866\") " pod="kube-system/kube-proxy-rfgdg" Jun 20 18:56:26.115819 kubelet[3159]: I0620 18:56:26.115505 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2fk\" (UniqueName: \"kubernetes.io/projected/62b6e636-b638-4faa-a9f2-d8b31e89f866-kube-api-access-ld2fk\") pod \"kube-proxy-rfgdg\" (UID: \"62b6e636-b638-4faa-a9f2-d8b31e89f866\") " pod="kube-system/kube-proxy-rfgdg" Jun 20 18:56:26.128898 systemd[1]: Created slice kubepods-burstable-pod016e655e_394d_46be_bb35_eb13463c6ac4.slice - libcontainer container kubepods-burstable-pod016e655e_394d_46be_bb35_eb13463c6ac4.slice. 
Jun 20 18:56:26.238253 kubelet[3159]: E0620 18:56:26.237413 3159 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 18:56:26.238253 kubelet[3159]: E0620 18:56:26.237456 3159 projected.go:194] Error preparing data for projected volume kube-api-access-6p2hj for pod kube-system/cilium-dbgms: configmap "kube-root-ca.crt" not found Jun 20 18:56:26.238610 kubelet[3159]: E0620 18:56:26.238487 3159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-kube-api-access-6p2hj podName:016e655e-394d-46be-bb35-eb13463c6ac4 nodeName:}" failed. No retries permitted until 2025-06-20 18:56:26.738457859 +0000 UTC m=+7.372203635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6p2hj" (UniqueName: "kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-kube-api-access-6p2hj") pod "cilium-dbgms" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4") : configmap "kube-root-ca.crt" not found Jun 20 18:56:26.240455 kubelet[3159]: E0620 18:56:26.239153 3159 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 18:56:26.240455 kubelet[3159]: E0620 18:56:26.239183 3159 projected.go:194] Error preparing data for projected volume kube-api-access-ld2fk for pod kube-system/kube-proxy-rfgdg: configmap "kube-root-ca.crt" not found Jun 20 18:56:26.240455 kubelet[3159]: E0620 18:56:26.239247 3159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62b6e636-b638-4faa-a9f2-d8b31e89f866-kube-api-access-ld2fk podName:62b6e636-b638-4faa-a9f2-d8b31e89f866 nodeName:}" failed. No retries permitted until 2025-06-20 18:56:26.739205781 +0000 UTC m=+7.372951547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ld2fk" (UniqueName: "kubernetes.io/projected/62b6e636-b638-4faa-a9f2-d8b31e89f866-kube-api-access-ld2fk") pod "kube-proxy-rfgdg" (UID: "62b6e636-b638-4faa-a9f2-d8b31e89f866") : configmap "kube-root-ca.crt" not found Jun 20 18:56:26.559627 systemd[1]: Created slice kubepods-besteffort-pod70026d22_746e_4af5_bc9e_220e8faac69a.slice - libcontainer container kubepods-besteffort-pod70026d22_746e_4af5_bc9e_220e8faac69a.slice. 
Jun 20 18:56:26.620285 kubelet[3159]: I0620 18:56:26.620234 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70026d22-746e-4af5-bc9e-220e8faac69a-cilium-config-path\") pod \"cilium-operator-5d85765b45-6pmlr\" (UID: \"70026d22-746e-4af5-bc9e-220e8faac69a\") " pod="kube-system/cilium-operator-5d85765b45-6pmlr" Jun 20 18:56:26.620755 kubelet[3159]: I0620 18:56:26.620312 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpxs\" (UniqueName: \"kubernetes.io/projected/70026d22-746e-4af5-bc9e-220e8faac69a-kube-api-access-rjpxs\") pod \"cilium-operator-5d85765b45-6pmlr\" (UID: \"70026d22-746e-4af5-bc9e-220e8faac69a\") " pod="kube-system/cilium-operator-5d85765b45-6pmlr" Jun 20 18:56:26.864007 containerd[1906]: time="2025-06-20T18:56:26.863599330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6pmlr,Uid:70026d22-746e-4af5-bc9e-220e8faac69a,Namespace:kube-system,Attempt:0,}" Jun 20 18:56:26.904723 containerd[1906]: time="2025-06-20T18:56:26.904545870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:56:26.904723 containerd[1906]: time="2025-06-20T18:56:26.904677716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:56:26.904723 containerd[1906]: time="2025-06-20T18:56:26.904690206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:26.905255 containerd[1906]: time="2025-06-20T18:56:26.905057153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:26.924441 systemd[1]: Started cri-containerd-936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c.scope - libcontainer container 936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c. Jun 20 18:56:26.972529 containerd[1906]: time="2025-06-20T18:56:26.972488258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6pmlr,Uid:70026d22-746e-4af5-bc9e-220e8faac69a,Namespace:kube-system,Attempt:0,} returns sandbox id \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\"" Jun 20 18:56:26.975972 containerd[1906]: time="2025-06-20T18:56:26.975934805Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 18:56:27.022017 containerd[1906]: time="2025-06-20T18:56:27.021968197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfgdg,Uid:62b6e636-b638-4faa-a9f2-d8b31e89f866,Namespace:kube-system,Attempt:0,}" Jun 20 18:56:27.034166 containerd[1906]: time="2025-06-20T18:56:27.034118260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbgms,Uid:016e655e-394d-46be-bb35-eb13463c6ac4,Namespace:kube-system,Attempt:0,}" Jun 20 18:56:27.078551 containerd[1906]: time="2025-06-20T18:56:27.078212255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:56:27.078551 containerd[1906]: time="2025-06-20T18:56:27.078284847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:56:27.078551 containerd[1906]: time="2025-06-20T18:56:27.078300771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:27.078551 containerd[1906]: time="2025-06-20T18:56:27.078396063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:27.081872 containerd[1906]: time="2025-06-20T18:56:27.081395623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:56:27.081872 containerd[1906]: time="2025-06-20T18:56:27.081444708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:56:27.081872 containerd[1906]: time="2025-06-20T18:56:27.081460692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:27.081872 containerd[1906]: time="2025-06-20T18:56:27.081529361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:27.099474 systemd[1]: Started cri-containerd-c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5.scope - libcontainer container c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5. Jun 20 18:56:27.103671 systemd[1]: Started cri-containerd-bc1424bac79c0662f00f8d9c8827414f56cf254afcc4e306dc4ea74a8773b642.scope - libcontainer container bc1424bac79c0662f00f8d9c8827414f56cf254afcc4e306dc4ea74a8773b642. Jun 20 18:56:27.133241 containerd[1906]: time="2025-06-20T18:56:27.132985726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbgms,Uid:016e655e-394d-46be-bb35-eb13463c6ac4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\"" Jun 20 18:56:27.140353 containerd[1906]: time="2025-06-20T18:56:27.140067376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfgdg,Uid:62b6e636-b638-4faa-a9f2-d8b31e89f866,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc1424bac79c0662f00f8d9c8827414f56cf254afcc4e306dc4ea74a8773b642\"" Jun 20 18:56:27.143984 containerd[1906]: time="2025-06-20T18:56:27.143954679Z" level=info msg="CreateContainer within sandbox \"bc1424bac79c0662f00f8d9c8827414f56cf254afcc4e306dc4ea74a8773b642\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:56:27.171244 containerd[1906]: time="2025-06-20T18:56:27.171177164Z" level=info msg="CreateContainer within sandbox \"bc1424bac79c0662f00f8d9c8827414f56cf254afcc4e306dc4ea74a8773b642\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d2e00e14d6fc3fd5490ff449d331f5f5ba5459662ec8d4297b53673edd7f2c43\"" Jun 20 18:56:27.172044 containerd[1906]: time="2025-06-20T18:56:27.171960982Z" level=info msg="StartContainer for \"d2e00e14d6fc3fd5490ff449d331f5f5ba5459662ec8d4297b53673edd7f2c43\"" Jun 20 18:56:27.199415 systemd[1]: Started cri-containerd-d2e00e14d6fc3fd5490ff449d331f5f5ba5459662ec8d4297b53673edd7f2c43.scope - libcontainer container d2e00e14d6fc3fd5490ff449d331f5f5ba5459662ec8d4297b53673edd7f2c43. 
Jun 20 18:56:27.236490 containerd[1906]: time="2025-06-20T18:56:27.236444928Z" level=info msg="StartContainer for \"d2e00e14d6fc3fd5490ff449d331f5f5ba5459662ec8d4297b53673edd7f2c43\" returns successfully" Jun 20 18:56:27.572766 kubelet[3159]: I0620 18:56:27.572691 3159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rfgdg" podStartSLOduration=1.5726723059999999 podStartE2EDuration="1.572672306s" podCreationTimestamp="2025-06-20 18:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:56:27.572586034 +0000 UTC m=+8.206331819" watchObservedRunningTime="2025-06-20 18:56:27.572672306 +0000 UTC m=+8.206418091" Jun 20 18:56:28.295773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1455319601.mount: Deactivated successfully. Jun 20 18:56:29.160025 containerd[1906]: time="2025-06-20T18:56:29.159976219Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:29.161929 containerd[1906]: time="2025-06-20T18:56:29.161860886Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 18:56:29.164276 containerd[1906]: time="2025-06-20T18:56:29.164197950Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:29.165814 containerd[1906]: time="2025-06-20T18:56:29.165727706Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.189749618s" Jun 20 18:56:29.166078 containerd[1906]: time="2025-06-20T18:56:29.165946909Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 18:56:29.167749 containerd[1906]: time="2025-06-20T18:56:29.167061302Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:56:29.168557 containerd[1906]: time="2025-06-20T18:56:29.168477149Z" level=info msg="CreateContainer within sandbox \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 18:56:29.195517 containerd[1906]: time="2025-06-20T18:56:29.195454236Z" level=info msg="CreateContainer within sandbox \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\"" Jun 20 18:56:29.196136 containerd[1906]: time="2025-06-20T18:56:29.196076758Z" level=info msg="StartContainer for \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\"" Jun 20 18:56:29.231451 systemd[1]: Started 
cri-containerd-ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457.scope - libcontainer container ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457. Jun 20 18:56:29.259955 containerd[1906]: time="2025-06-20T18:56:29.259904746Z" level=info msg="StartContainer for \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\" returns successfully" Jun 20 18:56:29.277689 systemd[1]: run-containerd-runc-k8s.io-ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457-runc.ZdMY7I.mount: Deactivated successfully. Jun 20 18:56:29.700798 kubelet[3159]: I0620 18:56:29.700725 3159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6pmlr" podStartSLOduration=1.508041306 podStartE2EDuration="3.700702512s" podCreationTimestamp="2025-06-20 18:56:26 +0000 UTC" firstStartedPulling="2025-06-20 18:56:26.974173233 +0000 UTC m=+7.607919010" lastFinishedPulling="2025-06-20 18:56:29.16683444 +0000 UTC m=+9.800580216" observedRunningTime="2025-06-20 18:56:29.671708048 +0000 UTC m=+10.305453832" watchObservedRunningTime="2025-06-20 18:56:29.700702512 +0000 UTC m=+10.334448304" Jun 20 18:56:32.527192 update_engine[1889]: I20250620 18:56:32.526510 1889 update_attempter.cc:509] Updating boot flags... Jun 20 18:56:32.632386 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3594) Jun 20 18:56:32.855075 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3593) Jun 20 18:56:34.450728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339563951.mount: Deactivated successfully. Jun 20 18:56:36.856470 containerd[1906]: time="2025-06-20T18:56:36.856413348Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:36.858805 containerd[1906]: time="2025-06-20T18:56:36.858741018Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 18:56:36.862246 containerd[1906]: time="2025-06-20T18:56:36.860901223Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:56:36.863106 containerd[1906]: time="2025-06-20T18:56:36.863059619Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.695963567s" Jun 20 18:56:36.863248 containerd[1906]: time="2025-06-20T18:56:36.863211409Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 18:56:36.868630 containerd[1906]: time="2025-06-20T18:56:36.868590812Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:56:36.969816 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3462383298.mount: Deactivated successfully. Jun 20 18:56:36.972306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860367563.mount: Deactivated successfully. Jun 20 18:56:36.980121 containerd[1906]: time="2025-06-20T18:56:36.980057684Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\"" Jun 20 18:56:36.980907 containerd[1906]: time="2025-06-20T18:56:36.980874817Z" level=info msg="StartContainer for \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\"" Jun 20 18:56:37.095673 systemd[1]: Started cri-containerd-b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53.scope - libcontainer container b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53. Jun 20 18:56:37.141627 containerd[1906]: time="2025-06-20T18:56:37.141362805Z" level=info msg="StartContainer for \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\" returns successfully" Jun 20 18:56:37.150088 systemd[1]: cri-containerd-b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53.scope: Deactivated successfully. Jun 20 18:56:37.383709 containerd[1906]: time="2025-06-20T18:56:37.377714845Z" level=info msg="shim disconnected" id=b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53 namespace=k8s.io Jun 20 18:56:37.383709 containerd[1906]: time="2025-06-20T18:56:37.383711210Z" level=warning msg="cleaning up after shim disconnected" id=b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53 namespace=k8s.io Jun 20 18:56:37.383937 containerd[1906]: time="2025-06-20T18:56:37.383727413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:56:37.615037 containerd[1906]: time="2025-06-20T18:56:37.614881731Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:56:37.641494 containerd[1906]: time="2025-06-20T18:56:37.641444352Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\"" Jun 20 18:56:37.643755 containerd[1906]: time="2025-06-20T18:56:37.642869294Z" level=info msg="StartContainer for \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\"" Jun 20 18:56:37.673415 systemd[1]: Started cri-containerd-10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8.scope - libcontainer container 10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8. Jun 20 18:56:37.701034 containerd[1906]: time="2025-06-20T18:56:37.700994491Z" level=info msg="StartContainer for \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\" returns successfully" Jun 20 18:56:37.716081 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:56:37.716453 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:56:37.716959 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:56:37.722765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 20 18:56:37.723040 systemd[1]: cri-containerd-10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8.scope: Deactivated successfully. Jun 20 18:56:37.761124 containerd[1906]: time="2025-06-20T18:56:37.761068323Z" level=info msg="shim disconnected" id=10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8 namespace=k8s.io Jun 20 18:56:37.761552 containerd[1906]: time="2025-06-20T18:56:37.761343788Z" level=warning msg="cleaning up after shim disconnected" id=10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8 namespace=k8s.io Jun 20 18:56:37.761552 containerd[1906]: time="2025-06-20T18:56:37.761372476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:56:37.781243 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:56:37.966302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53-rootfs.mount: Deactivated successfully. Jun 20 18:56:38.611729 containerd[1906]: time="2025-06-20T18:56:38.611674235Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:56:38.642842 containerd[1906]: time="2025-06-20T18:56:38.642796660Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\"" Jun 20 18:56:38.643746 containerd[1906]: time="2025-06-20T18:56:38.643707823Z" level=info msg="StartContainer for \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\"" Jun 20 18:56:38.679461 systemd[1]: Started cri-containerd-989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44.scope - libcontainer container 989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44. Jun 20 18:56:38.711731 containerd[1906]: time="2025-06-20T18:56:38.711492474Z" level=info msg="StartContainer for \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\" returns successfully" Jun 20 18:56:38.727069 systemd[1]: cri-containerd-989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44.scope: Deactivated successfully. Jun 20 18:56:38.727360 systemd[1]: cri-containerd-989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44.scope: Consumed 19ms CPU time, 3.1M memory peak, 1M read from disk. Jun 20 18:56:38.753847 containerd[1906]: time="2025-06-20T18:56:38.753790124Z" level=info msg="shim disconnected" id=989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44 namespace=k8s.io Jun 20 18:56:38.753847 containerd[1906]: time="2025-06-20T18:56:38.753840683Z" level=warning msg="cleaning up after shim disconnected" id=989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44 namespace=k8s.io Jun 20 18:56:38.754060 containerd[1906]: time="2025-06-20T18:56:38.753865581Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:56:38.965821 systemd[1]: run-containerd-runc-k8s.io-989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44-runc.az8Cw8.mount: Deactivated successfully. Jun 20 18:56:38.965933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44-rootfs.mount: Deactivated successfully. 
Jun 20 18:56:39.616394 containerd[1906]: time="2025-06-20T18:56:39.616041814Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:56:39.644061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213165267.mount: Deactivated successfully. Jun 20 18:56:39.645877 containerd[1906]: time="2025-06-20T18:56:39.645821187Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\"" Jun 20 18:56:39.646401 containerd[1906]: time="2025-06-20T18:56:39.646369828Z" level=info msg="StartContainer for \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\"" Jun 20 18:56:39.676400 systemd[1]: Started cri-containerd-e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb.scope - libcontainer container e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb. Jun 20 18:56:39.705595 systemd[1]: cri-containerd-e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb.scope: Deactivated successfully. Jun 20 18:56:39.708195 containerd[1906]: time="2025-06-20T18:56:39.707714969Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016e655e_394d_46be_bb35_eb13463c6ac4.slice/cri-containerd-e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb.scope/memory.events\": no such file or directory" Jun 20 18:56:39.709633 kubelet[3159]: E0620 18:56:39.709566 3159 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016e655e_394d_46be_bb35_eb13463c6ac4.slice/cri-containerd-e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb.scope\": RecentStats: unable to find data in memory cache]" Jun 20 18:56:39.712435 containerd[1906]: time="2025-06-20T18:56:39.712353409Z" level=info msg="StartContainer for \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\" returns successfully" Jun 20 18:56:39.742769 containerd[1906]: time="2025-06-20T18:56:39.742716595Z" level=info msg="shim disconnected" id=e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb namespace=k8s.io Jun 20 18:56:39.742769 containerd[1906]: time="2025-06-20T18:56:39.742764286Z" level=warning msg="cleaning up after shim disconnected" id=e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb namespace=k8s.io Jun 20 18:56:39.742769 containerd[1906]: time="2025-06-20T18:56:39.742772242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:56:39.965869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb-rootfs.mount: Deactivated successfully. 
Jun 20 18:56:40.621824 containerd[1906]: time="2025-06-20T18:56:40.621670680Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:56:40.639878 containerd[1906]: time="2025-06-20T18:56:40.639601409Z" level=info msg="CreateContainer within sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\"" Jun 20 18:56:40.643574 containerd[1906]: time="2025-06-20T18:56:40.643518780Z" level=info msg="StartContainer for \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\"" Jun 20 18:56:40.709409 systemd[1]: Started cri-containerd-63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68.scope - libcontainer container 63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68. Jun 20 18:56:40.741498 containerd[1906]: time="2025-06-20T18:56:40.741333566Z" level=info msg="StartContainer for \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\" returns successfully" Jun 20 18:56:41.003966 kubelet[3159]: I0620 18:56:41.003818 3159 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 20 18:56:41.057853 systemd[1]: Created slice kubepods-burstable-pod4e2cd9fe_a133_4ae9_8d4d_cb0f353b58bf.slice - libcontainer container kubepods-burstable-pod4e2cd9fe_a133_4ae9_8d4d_cb0f353b58bf.slice. Jun 20 18:56:41.068453 systemd[1]: Created slice kubepods-burstable-podea4aa17b_e8e9_4a78_b1e4_9373a2a277aa.slice - libcontainer container kubepods-burstable-podea4aa17b_e8e9_4a78_b1e4_9373a2a277aa.slice. Jun 20 18:56:41.121827 kubelet[3159]: I0620 18:56:41.121607 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlc5r\" (UniqueName: \"kubernetes.io/projected/4e2cd9fe-a133-4ae9-8d4d-cb0f353b58bf-kube-api-access-zlc5r\") pod \"coredns-7c65d6cfc9-5f2n5\" (UID: \"4e2cd9fe-a133-4ae9-8d4d-cb0f353b58bf\") " pod="kube-system/coredns-7c65d6cfc9-5f2n5" Jun 20 18:56:41.121827 kubelet[3159]: I0620 18:56:41.121746 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea4aa17b-e8e9-4a78-b1e4-9373a2a277aa-config-volume\") pod \"coredns-7c65d6cfc9-jffx6\" (UID: \"ea4aa17b-e8e9-4a78-b1e4-9373a2a277aa\") " pod="kube-system/coredns-7c65d6cfc9-jffx6" Jun 20 18:56:41.122188 kubelet[3159]: I0620 18:56:41.121981 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpd89\" (UniqueName: \"kubernetes.io/projected/ea4aa17b-e8e9-4a78-b1e4-9373a2a277aa-kube-api-access-vpd89\") pod \"coredns-7c65d6cfc9-jffx6\" (UID: \"ea4aa17b-e8e9-4a78-b1e4-9373a2a277aa\") " pod="kube-system/coredns-7c65d6cfc9-jffx6" Jun 20 18:56:41.122188 kubelet[3159]: I0620 18:56:41.122030 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e2cd9fe-a133-4ae9-8d4d-cb0f353b58bf-config-volume\") pod \"coredns-7c65d6cfc9-5f2n5\" (UID: \"4e2cd9fe-a133-4ae9-8d4d-cb0f353b58bf\") " pod="kube-system/coredns-7c65d6cfc9-5f2n5" Jun 20 18:56:41.367832 containerd[1906]: time="2025-06-20T18:56:41.367422355Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5f2n5,Uid:4e2cd9fe-a133-4ae9-8d4d-cb0f353b58bf,Namespace:kube-system,Attempt:0,}" Jun 20 18:56:41.372725 containerd[1906]: time="2025-06-20T18:56:41.372313716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jffx6,Uid:ea4aa17b-e8e9-4a78-b1e4-9373a2a277aa,Namespace:kube-system,Attempt:0,}" Jun 20 18:56:43.074985 systemd-networkd[1783]: cilium_host: Link UP Jun 20 18:56:43.076288 systemd-networkd[1783]: cilium_net: Link UP Jun 20 18:56:43.077262 systemd-networkd[1783]: cilium_net: Gained carrier Jun 20 18:56:43.077481 systemd-networkd[1783]: cilium_host: Gained carrier Jun 20 18:56:43.080645 (udev-worker)[4109]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:56:43.081773 (udev-worker)[4111]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:56:43.190754 (udev-worker)[4193]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:56:43.198764 systemd-networkd[1783]: cilium_vxlan: Link UP Jun 20 18:56:43.198774 systemd-networkd[1783]: cilium_vxlan: Gained carrier Jun 20 18:56:43.614586 systemd-networkd[1783]: cilium_net: Gained IPv6LL Jun 20 18:56:43.614841 systemd-networkd[1783]: cilium_host: Gained IPv6LL Jun 20 18:56:43.745518 kernel: NET: Registered PF_ALG protocol family Jun 20 18:56:44.428399 systemd-networkd[1783]: lxc_health: Link UP Jun 20 18:56:44.431563 (udev-worker)[4192]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:56:44.447566 systemd-networkd[1783]: lxc_health: Gained carrier Jun 20 18:56:44.895331 systemd-networkd[1783]: cilium_vxlan: Gained IPv6LL Jun 20 18:56:45.016889 kernel: eth0: renamed from tmp9dcbe Jun 20 18:56:45.017268 systemd-networkd[1783]: lxce728f95f2f42: Link UP Jun 20 18:56:45.027292 systemd-networkd[1783]: lxce728f95f2f42: Gained carrier Jun 20 18:56:45.043971 systemd-networkd[1783]: lxc3d115e6eb32e: Link UP Jun 20 18:56:45.045261 kernel: eth0: renamed from tmp674ad Jun 20 18:56:45.052481 systemd-networkd[1783]: lxc3d115e6eb32e: Gained carrier Jun 20 18:56:45.106001 kubelet[3159]: I0620 18:56:45.105867 3159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dbgms" podStartSLOduration=9.375459883 podStartE2EDuration="19.105843874s" podCreationTimestamp="2025-06-20 18:56:26 +0000 UTC" firstStartedPulling="2025-06-20 18:56:27.13519382 +0000 UTC m=+7.768939586" lastFinishedPulling="2025-06-20 18:56:36.865577811 +0000 UTC m=+17.499323577" observedRunningTime="2025-06-20 18:56:41.641682746 +0000 UTC m=+22.275428532" watchObservedRunningTime="2025-06-20 18:56:45.105843874 +0000 UTC m=+25.739589661" Jun 20 18:56:46.046464 systemd-networkd[1783]: lxc_health: Gained IPv6LL Jun 20 18:56:46.814587 systemd-networkd[1783]: lxce728f95f2f42: Gained IPv6LL Jun 20 18:56:46.878498 systemd-networkd[1783]: lxc3d115e6eb32e: Gained IPv6LL Jun 20 18:56:49.272573 containerd[1906]: time="2025-06-20T18:56:49.269980281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:56:49.272573 containerd[1906]: time="2025-06-20T18:56:49.270067238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:56:49.272573 containerd[1906]: time="2025-06-20T18:56:49.270092418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:49.272573 containerd[1906]: time="2025-06-20T18:56:49.270855577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:49.293802 containerd[1906]: time="2025-06-20T18:56:49.293689283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:56:49.293954 containerd[1906]: time="2025-06-20T18:56:49.293870369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:56:49.298250 containerd[1906]: time="2025-06-20T18:56:49.296411047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:49.298250 containerd[1906]: time="2025-06-20T18:56:49.296549484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:49.372396 systemd[1]: Started cri-containerd-9dcbe5a8650751f991472193acb27cdc942f921d108b6f72eea679b66aa1f9eb.scope - libcontainer container 9dcbe5a8650751f991472193acb27cdc942f921d108b6f72eea679b66aa1f9eb. Jun 20 18:56:49.383452 systemd[1]: Started cri-containerd-674ad9b7a1d5b3b6d3e66519c3c91924f74919618528a79cd55956d00dd2330a.scope - libcontainer container 674ad9b7a1d5b3b6d3e66519c3c91924f74919618528a79cd55956d00dd2330a. Jun 20 18:56:49.499250 containerd[1906]: time="2025-06-20T18:56:49.492704062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jffx6,Uid:ea4aa17b-e8e9-4a78-b1e4-9373a2a277aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dcbe5a8650751f991472193acb27cdc942f921d108b6f72eea679b66aa1f9eb\"" Jun 20 18:56:49.505061 containerd[1906]: time="2025-06-20T18:56:49.504915811Z" level=info msg="CreateContainer within sandbox \"9dcbe5a8650751f991472193acb27cdc942f921d108b6f72eea679b66aa1f9eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:56:49.518361 containerd[1906]: time="2025-06-20T18:56:49.518310041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5f2n5,Uid:4e2cd9fe-a133-4ae9-8d4d-cb0f353b58bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"674ad9b7a1d5b3b6d3e66519c3c91924f74919618528a79cd55956d00dd2330a\"" Jun 20 18:56:49.523751 containerd[1906]: time="2025-06-20T18:56:49.523630090Z" level=info msg="CreateContainer within sandbox \"674ad9b7a1d5b3b6d3e66519c3c91924f74919618528a79cd55956d00dd2330a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:56:49.556176 containerd[1906]: time="2025-06-20T18:56:49.556091891Z" level=info msg="CreateContainer within sandbox \"9dcbe5a8650751f991472193acb27cdc942f921d108b6f72eea679b66aa1f9eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b324925516130ddc01a954a6769f26b0b684dccec2adcff2e33bc48c51251283\"" Jun 20 18:56:49.557522 containerd[1906]: time="2025-06-20T18:56:49.556679804Z" level=info msg="StartContainer for \"b324925516130ddc01a954a6769f26b0b684dccec2adcff2e33bc48c51251283\"" Jun 20 18:56:49.565010 containerd[1906]: time="2025-06-20T18:56:49.564971213Z" level=info msg="CreateContainer within sandbox \"674ad9b7a1d5b3b6d3e66519c3c91924f74919618528a79cd55956d00dd2330a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b4bdeac6ec925b1dda47e7cbf4f43e10d759263bf516c38dd41afb89071cff2\"" Jun 
20 18:56:49.565645 containerd[1906]: time="2025-06-20T18:56:49.565620727Z" level=info msg="StartContainer for \"4b4bdeac6ec925b1dda47e7cbf4f43e10d759263bf516c38dd41afb89071cff2\"" Jun 20 18:56:49.588421 systemd[1]: Started cri-containerd-b324925516130ddc01a954a6769f26b0b684dccec2adcff2e33bc48c51251283.scope - libcontainer container b324925516130ddc01a954a6769f26b0b684dccec2adcff2e33bc48c51251283. Jun 20 18:56:49.598666 systemd[1]: Started cri-containerd-4b4bdeac6ec925b1dda47e7cbf4f43e10d759263bf516c38dd41afb89071cff2.scope - libcontainer container 4b4bdeac6ec925b1dda47e7cbf4f43e10d759263bf516c38dd41afb89071cff2. Jun 20 18:56:49.641326 containerd[1906]: time="2025-06-20T18:56:49.640709622Z" level=info msg="StartContainer for \"b324925516130ddc01a954a6769f26b0b684dccec2adcff2e33bc48c51251283\" returns successfully" Jun 20 18:56:49.661315 containerd[1906]: time="2025-06-20T18:56:49.660534488Z" level=info msg="StartContainer for \"4b4bdeac6ec925b1dda47e7cbf4f43e10d759263bf516c38dd41afb89071cff2\" returns successfully" Jun 20 18:56:49.688917 kubelet[3159]: I0620 18:56:49.688857 3159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jffx6" podStartSLOduration=23.688838684 podStartE2EDuration="23.688838684s" podCreationTimestamp="2025-06-20 18:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:56:49.688715952 +0000 UTC m=+30.322461739" watchObservedRunningTime="2025-06-20 18:56:49.688838684 +0000 UTC m=+30.322584470" Jun 20 18:56:49.704836 kubelet[3159]: I0620 18:56:49.704773 3159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5f2n5" podStartSLOduration=23.70475826 podStartE2EDuration="23.70475826s" podCreationTimestamp="2025-06-20 18:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:56:49.703718096 +0000 UTC m=+30.337463880" watchObservedRunningTime="2025-06-20 18:56:49.70475826 +0000 UTC m=+30.338504043" Jun 20 18:56:50.278972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707319886.mount: Deactivated successfully. 
Jun 20 18:56:51.861335 ntpd[1883]: Listen normally on 8 cilium_host 192.168.0.22:123 Jun 20 18:56:51.861412 ntpd[1883]: Listen normally on 9 cilium_net [fe80::68ee:a5ff:fe50:9bd6%4]:123 Jun 20 18:56:51.861758 ntpd[1883]: 20 Jun 18:56:51 ntpd[1883]: Listen normally on 8 cilium_host 192.168.0.22:123 Jun 20 18:56:51.861758 ntpd[1883]: 20 Jun 18:56:51 ntpd[1883]: Listen normally on 9 cilium_net [fe80::68ee:a5ff:fe50:9bd6%4]:123 Jun 20 18:56:51.861758 ntpd[1883]: 20 Jun 18:56:51 ntpd[1883]: Listen normally on 10 cilium_host [fe80::9c43:71ff:fea4:b1fb%5]:123 Jun 20 18:56:51.861758 ntpd[1883]: 20 Jun 18:56:51 ntpd[1883]: Listen normally on 11 cilium_vxlan [fe80::9cad:48ff:fe30:dcd4%6]:123 Jun 20 18:56:51.861758 ntpd[1883]: 20 Jun 18:56:51 ntpd[1883]: Listen normally on 12 lxc_health [fe80::f05d:42ff:fe67:c0ff%8]:123 Jun 20 18:56:51.861758 ntpd[1883]: 20 Jun 18:56:51 ntpd[1883]: Listen normally on 13 lxce728f95f2f42 [fe80::705c:17ff:fedd:dded%10]:123 Jun 20 18:56:51.861758 ntpd[1883]: 20 Jun 18:56:51 ntpd[1883]: Listen normally on 14 lxc3d115e6eb32e [fe80::7064:afff:fe9d:b6c%12]:123 Jun 20 18:56:51.861467 ntpd[1883]: Listen normally on 10 cilium_host [fe80::9c43:71ff:fea4:b1fb%5]:123 Jun 20 18:56:51.861500 ntpd[1883]: Listen normally on 11 cilium_vxlan [fe80::9cad:48ff:fe30:dcd4%6]:123 Jun 20 18:56:51.861532 ntpd[1883]: Listen normally on 12 lxc_health [fe80::f05d:42ff:fe67:c0ff%8]:123 Jun 20 18:56:51.861568 ntpd[1883]: Listen normally on 13 lxce728f95f2f42 [fe80::705c:17ff:fedd:dded%10]:123 Jun 20 18:56:51.861596 ntpd[1883]: Listen normally on 14 lxc3d115e6eb32e [fe80::7064:afff:fe9d:b6c%12]:123 Jun 20 18:56:53.228586 systemd[1]: Started sshd@7-172.31.28.28:22-139.178.68.195:38804.service - OpenSSH per-connection server daemon (139.178.68.195:38804). Jun 20 18:56:53.419923 sshd[4712]: Accepted publickey for core from 139.178.68.195 port 38804 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:56:53.421990 sshd-session[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:53.427846 systemd-logind[1888]: New session 8 of user core. Jun 20 18:56:53.431418 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 18:56:54.212387 sshd[4714]: Connection closed by 139.178.68.195 port 38804 Jun 20 18:56:54.212981 sshd-session[4712]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:54.216962 systemd[1]: sshd@7-172.31.28.28:22-139.178.68.195:38804.service: Deactivated successfully. Jun 20 18:56:54.218653 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 18:56:54.219542 systemd-logind[1888]: Session 8 logged out. Waiting for processes to exit. Jun 20 18:56:54.220744 systemd-logind[1888]: Removed session 8. Jun 20 18:56:59.249566 systemd[1]: Started sshd@8-172.31.28.28:22-139.178.68.195:44362.service - OpenSSH per-connection server daemon (139.178.68.195:44362). Jun 20 18:56:59.416359 sshd[4730]: Accepted publickey for core from 139.178.68.195 port 44362 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:56:59.417691 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:59.422589 systemd-logind[1888]: New session 9 of user core. Jun 20 18:56:59.429442 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 20 18:56:59.645646 sshd[4732]: Connection closed by 139.178.68.195 port 44362 Jun 20 18:56:59.646572 sshd-session[4730]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:59.652175 systemd[1]: sshd@8-172.31.28.28:22-139.178.68.195:44362.service: Deactivated successfully. Jun 20 18:56:59.653996 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 18:56:59.654855 systemd-logind[1888]: Session 9 logged out. Waiting for processes to exit. Jun 20 18:56:59.656001 systemd-logind[1888]: Removed session 9. Jun 20 18:57:04.684563 systemd[1]: Started sshd@9-172.31.28.28:22-139.178.68.195:58068.service - OpenSSH per-connection server daemon (139.178.68.195:58068). Jun 20 18:57:04.857535 sshd[4745]: Accepted publickey for core from 139.178.68.195 port 58068 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:04.858891 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:04.864595 systemd-logind[1888]: New session 10 of user core. Jun 20 18:57:04.869394 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:57:05.061030 sshd[4747]: Connection closed by 139.178.68.195 port 58068 Jun 20 18:57:05.061655 sshd-session[4745]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:05.065316 systemd[1]: sshd@9-172.31.28.28:22-139.178.68.195:58068.service: Deactivated successfully. Jun 20 18:57:05.067234 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:57:05.068023 systemd-logind[1888]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:57:05.069392 systemd-logind[1888]: Removed session 10. Jun 20 18:57:10.100743 systemd[1]: Started sshd@10-172.31.28.28:22-139.178.68.195:58074.service - OpenSSH per-connection server daemon (139.178.68.195:58074). Jun 20 18:57:10.268447 sshd[4761]: Accepted publickey for core from 139.178.68.195 port 58074 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:10.269978 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:10.275325 systemd-logind[1888]: New session 11 of user core. Jun 20 18:57:10.279528 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 18:57:10.470007 sshd[4763]: Connection closed by 139.178.68.195 port 58074 Jun 20 18:57:10.470823 sshd-session[4761]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:10.473887 systemd[1]: sshd@10-172.31.28.28:22-139.178.68.195:58074.service: Deactivated successfully. Jun 20 18:57:10.475821 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:57:10.477679 systemd-logind[1888]: Session 11 logged out. Waiting for processes to exit. Jun 20 18:57:10.478764 systemd-logind[1888]: Removed session 11. Jun 20 18:57:10.506526 systemd[1]: Started sshd@11-172.31.28.28:22-139.178.68.195:58076.service - OpenSSH per-connection server daemon (139.178.68.195:58076). Jun 20 18:57:10.665401 sshd[4776]: Accepted publickey for core from 139.178.68.195 port 58076 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:10.666845 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:10.671651 systemd-logind[1888]: New session 12 of user core. Jun 20 18:57:10.678463 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 20 18:57:10.907243 sshd[4778]: Connection closed by 139.178.68.195 port 58076 Jun 20 18:57:10.908904 sshd-session[4776]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:10.914791 systemd-logind[1888]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:57:10.915997 systemd[1]: sshd@11-172.31.28.28:22-139.178.68.195:58076.service: Deactivated successfully. Jun 20 18:57:10.919830 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:57:10.924384 systemd-logind[1888]: Removed session 12. Jun 20 18:57:10.945647 systemd[1]: Started sshd@12-172.31.28.28:22-139.178.68.195:58088.service - OpenSSH per-connection server daemon (139.178.68.195:58088). Jun 20 18:57:11.119083 sshd[4789]: Accepted publickey for core from 139.178.68.195 port 58088 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:11.120855 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:11.125404 systemd-logind[1888]: New session 13 of user core. Jun 20 18:57:11.128471 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 18:57:11.336491 sshd[4791]: Connection closed by 139.178.68.195 port 58088 Jun 20 18:57:11.337116 sshd-session[4789]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:11.340792 systemd[1]: sshd@12-172.31.28.28:22-139.178.68.195:58088.service: Deactivated successfully. Jun 20 18:57:11.342591 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:57:11.343640 systemd-logind[1888]: Session 13 logged out. Waiting for processes to exit. Jun 20 18:57:11.344818 systemd-logind[1888]: Removed session 13. Jun 20 18:57:16.374672 systemd[1]: Started sshd@13-172.31.28.28:22-139.178.68.195:55076.service - OpenSSH per-connection server daemon (139.178.68.195:55076). Jun 20 18:57:16.543857 sshd[4804]: Accepted publickey for core from 139.178.68.195 port 55076 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:16.545397 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:16.550747 systemd-logind[1888]: New session 14 of user core. Jun 20 18:57:16.557442 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 18:57:16.742682 sshd[4806]: Connection closed by 139.178.68.195 port 55076 Jun 20 18:57:16.743428 sshd-session[4804]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:16.746815 systemd[1]: sshd@13-172.31.28.28:22-139.178.68.195:55076.service: Deactivated successfully. Jun 20 18:57:16.749279 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:57:16.751055 systemd-logind[1888]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:57:16.752711 systemd-logind[1888]: Removed session 14. Jun 20 18:57:21.781581 systemd[1]: Started sshd@14-172.31.28.28:22-139.178.68.195:55086.service - OpenSSH per-connection server daemon (139.178.68.195:55086). Jun 20 18:57:21.944779 sshd[4820]: Accepted publickey for core from 139.178.68.195 port 55086 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:21.946279 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:21.951194 systemd-logind[1888]: New session 15 of user core. Jun 20 18:57:21.959518 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 20 18:57:22.144492 sshd[4822]: Connection closed by 139.178.68.195 port 55086 Jun 20 18:57:22.145395 sshd-session[4820]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:22.155204 systemd[1]: sshd@14-172.31.28.28:22-139.178.68.195:55086.service: Deactivated successfully. Jun 20 18:57:22.159140 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 18:57:22.160613 systemd-logind[1888]: Session 15 logged out. Waiting for processes to exit. Jun 20 18:57:22.161707 systemd-logind[1888]: Removed session 15. Jun 20 18:57:22.178544 systemd[1]: Started sshd@15-172.31.28.28:22-139.178.68.195:55096.service - OpenSSH per-connection server daemon (139.178.68.195:55096). Jun 20 18:57:22.361619 sshd[4833]: Accepted publickey for core from 139.178.68.195 port 55096 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:22.362964 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:22.368026 systemd-logind[1888]: New session 16 of user core. Jun 20 18:57:22.371472 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 18:57:23.029531 sshd[4835]: Connection closed by 139.178.68.195 port 55096 Jun 20 18:57:23.030471 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:23.033691 systemd[1]: sshd@15-172.31.28.28:22-139.178.68.195:55096.service: Deactivated successfully. Jun 20 18:57:23.035630 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 18:57:23.036924 systemd-logind[1888]: Session 16 logged out. Waiting for processes to exit. Jun 20 18:57:23.038011 systemd-logind[1888]: Removed session 16. Jun 20 18:57:23.068509 systemd[1]: Started sshd@16-172.31.28.28:22-139.178.68.195:55100.service - OpenSSH per-connection server daemon (139.178.68.195:55100). Jun 20 18:57:23.232883 sshd[4845]: Accepted publickey for core from 139.178.68.195 port 55100 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:23.234375 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:23.239170 systemd-logind[1888]: New session 17 of user core. Jun 20 18:57:23.244394 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 18:57:25.066176 sshd[4847]: Connection closed by 139.178.68.195 port 55100 Jun 20 18:57:25.068319 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:25.081211 systemd[1]: sshd@16-172.31.28.28:22-139.178.68.195:55100.service: Deactivated successfully. Jun 20 18:57:25.087526 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 18:57:25.089796 systemd-logind[1888]: Session 17 logged out. Waiting for processes to exit. Jun 20 18:57:25.108609 systemd[1]: Started sshd@17-172.31.28.28:22-139.178.68.195:40564.service - OpenSSH per-connection server daemon (139.178.68.195:40564). Jun 20 18:57:25.110452 systemd-logind[1888]: Removed session 17. Jun 20 18:57:25.294537 sshd[4863]: Accepted publickey for core from 139.178.68.195 port 40564 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:25.296031 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:25.301297 systemd-logind[1888]: New session 18 of user core. Jun 20 18:57:25.306419 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 20 18:57:25.820850 sshd[4866]: Connection closed by 139.178.68.195 port 40564 Jun 20 18:57:25.821577 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:25.825145 systemd[1]: sshd@17-172.31.28.28:22-139.178.68.195:40564.service: Deactivated successfully. Jun 20 18:57:25.825475 systemd-logind[1888]: Session 18 logged out. Waiting for processes to exit. Jun 20 18:57:25.828119 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 18:57:25.830540 systemd-logind[1888]: Removed session 18. Jun 20 18:57:25.856383 systemd[1]: Started sshd@18-172.31.28.28:22-139.178.68.195:40566.service - OpenSSH per-connection server daemon (139.178.68.195:40566). Jun 20 18:57:26.016927 sshd[4876]: Accepted publickey for core from 139.178.68.195 port 40566 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:26.020724 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:26.026203 systemd-logind[1888]: New session 19 of user core. Jun 20 18:57:26.031547 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 18:57:26.211590 sshd[4878]: Connection closed by 139.178.68.195 port 40566 Jun 20 18:57:26.212488 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:26.215543 systemd[1]: sshd@18-172.31.28.28:22-139.178.68.195:40566.service: Deactivated successfully. Jun 20 18:57:26.217831 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 18:57:26.219418 systemd-logind[1888]: Session 19 logged out. Waiting for processes to exit. Jun 20 18:57:26.220759 systemd-logind[1888]: Removed session 19. Jun 20 18:57:31.254548 systemd[1]: Started sshd@19-172.31.28.28:22-139.178.68.195:40582.service - OpenSSH per-connection server daemon (139.178.68.195:40582). Jun 20 18:57:31.418869 sshd[4895]: Accepted publickey for core from 139.178.68.195 port 40582 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:31.420394 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:31.425324 systemd-logind[1888]: New session 20 of user core. Jun 20 18:57:31.430416 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 18:57:31.626729 sshd[4897]: Connection closed by 139.178.68.195 port 40582 Jun 20 18:57:31.626877 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:31.634076 systemd[1]: sshd@19-172.31.28.28:22-139.178.68.195:40582.service: Deactivated successfully. Jun 20 18:57:31.636894 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 18:57:31.638139 systemd-logind[1888]: Session 20 logged out. Waiting for processes to exit. Jun 20 18:57:31.639681 systemd-logind[1888]: Removed session 20. Jun 20 18:57:36.661534 systemd[1]: Started sshd@20-172.31.28.28:22-139.178.68.195:56898.service - OpenSSH per-connection server daemon (139.178.68.195:56898). Jun 20 18:57:36.821789 sshd[4909]: Accepted publickey for core from 139.178.68.195 port 56898 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:36.823085 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:36.827692 systemd-logind[1888]: New session 21 of user core. Jun 20 18:57:36.834432 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 20 18:57:37.021669 sshd[4911]: Connection closed by 139.178.68.195 port 56898 Jun 20 18:57:37.022281 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:37.025210 systemd[1]: sshd@20-172.31.28.28:22-139.178.68.195:56898.service: Deactivated successfully. Jun 20 18:57:37.027008 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 18:57:37.028455 systemd-logind[1888]: Session 21 logged out. Waiting for processes to exit. Jun 20 18:57:37.029522 systemd-logind[1888]: Removed session 21. Jun 20 18:57:42.056520 systemd[1]: Started sshd@21-172.31.28.28:22-139.178.68.195:56904.service - OpenSSH per-connection server daemon (139.178.68.195:56904). Jun 20 18:57:42.209823 sshd[4923]: Accepted publickey for core from 139.178.68.195 port 56904 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:42.211342 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:42.215885 systemd-logind[1888]: New session 22 of user core. Jun 20 18:57:42.220440 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 18:57:42.402471 sshd[4925]: Connection closed by 139.178.68.195 port 56904 Jun 20 18:57:42.403348 sshd-session[4923]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:42.406016 systemd[1]: sshd@21-172.31.28.28:22-139.178.68.195:56904.service: Deactivated successfully. Jun 20 18:57:42.407909 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 18:57:42.409311 systemd-logind[1888]: Session 22 logged out. Waiting for processes to exit. Jun 20 18:57:42.410434 systemd-logind[1888]: Removed session 22. Jun 20 18:57:42.442581 systemd[1]: Started sshd@22-172.31.28.28:22-139.178.68.195:56908.service - OpenSSH per-connection server daemon (139.178.68.195:56908). Jun 20 18:57:42.603333 sshd[4937]: Accepted publickey for core from 139.178.68.195 port 56908 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:42.604725 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:42.609319 systemd-logind[1888]: New session 23 of user core. Jun 20 18:57:42.617432 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 18:57:44.134122 containerd[1906]: time="2025-06-20T18:57:44.133731021Z" level=info msg="StopContainer for \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\" with timeout 30 (s)" Jun 20 18:57:44.136543 containerd[1906]: time="2025-06-20T18:57:44.136420500Z" level=info msg="Stop container \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\" with signal terminated" Jun 20 18:57:44.152361 systemd[1]: run-containerd-runc-k8s.io-63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68-runc.pJ991O.mount: Deactivated successfully. Jun 20 18:57:44.158202 systemd[1]: cri-containerd-ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457.scope: Deactivated successfully. Jun 20 18:57:44.158590 systemd[1]: cri-containerd-ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457.scope: Consumed 413ms CPU time, 32.9M memory peak, 7.5M read from disk, 4K written to disk. 
Jun 20 18:57:44.171998 containerd[1906]: time="2025-06-20T18:57:44.171831238Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:57:44.180914 containerd[1906]: time="2025-06-20T18:57:44.180786575Z" level=info msg="StopContainer for \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\" with timeout 2 (s)" Jun 20 18:57:44.181250 containerd[1906]: time="2025-06-20T18:57:44.181179230Z" level=info msg="Stop container \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\" with signal terminated" Jun 20 18:57:44.186159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457-rootfs.mount: Deactivated successfully. Jun 20 18:57:44.192944 systemd-networkd[1783]: lxc_health: Link DOWN Jun 20 18:57:44.192951 systemd-networkd[1783]: lxc_health: Lost carrier Jun 20 18:57:44.215707 containerd[1906]: time="2025-06-20T18:57:44.215649305Z" level=info msg="shim disconnected" id=ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457 namespace=k8s.io Jun 20 18:57:44.215707 containerd[1906]: time="2025-06-20T18:57:44.215703534Z" level=warning msg="cleaning up after shim disconnected" id=ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457 namespace=k8s.io Jun 20 18:57:44.215707 containerd[1906]: time="2025-06-20T18:57:44.215712201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:44.217578 systemd[1]: cri-containerd-63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68.scope: Deactivated successfully. Jun 20 18:57:44.217844 systemd[1]: cri-containerd-63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68.scope: Consumed 7.545s CPU time, 188.7M memory peak, 64.4M read from disk, 13.3M written to disk. Jun 20 18:57:44.238268 containerd[1906]: time="2025-06-20T18:57:44.238178915Z" level=info msg="StopContainer for \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\" returns successfully" Jun 20 18:57:44.245629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68-rootfs.mount: Deactivated successfully. Jun 20 18:57:44.254531 containerd[1906]: time="2025-06-20T18:57:44.254491332Z" level=info msg="StopPodSandbox for \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\"" Jun 20 18:57:44.259831 containerd[1906]: time="2025-06-20T18:57:44.255865293Z" level=info msg="Container to stop \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:44.262099 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c-shm.mount: Deactivated successfully. 
Jun 20 18:57:44.267353 containerd[1906]: time="2025-06-20T18:57:44.267292942Z" level=info msg="shim disconnected" id=63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68 namespace=k8s.io Jun 20 18:57:44.267353 containerd[1906]: time="2025-06-20T18:57:44.267348237Z" level=warning msg="cleaning up after shim disconnected" id=63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68 namespace=k8s.io Jun 20 18:57:44.267353 containerd[1906]: time="2025-06-20T18:57:44.267356852Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:44.272930 systemd[1]: cri-containerd-936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c.scope: Deactivated successfully. Jun 20 18:57:44.291262 containerd[1906]: time="2025-06-20T18:57:44.290627874Z" level=info msg="StopContainer for \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\" returns successfully" Jun 20 18:57:44.291585 containerd[1906]: time="2025-06-20T18:57:44.291540210Z" level=info msg="StopPodSandbox for \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\"" Jun 20 18:57:44.291659 containerd[1906]: time="2025-06-20T18:57:44.291603742Z" level=info msg="Container to stop \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:44.291693 containerd[1906]: time="2025-06-20T18:57:44.291660630Z" level=info msg="Container to stop \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:44.291693 containerd[1906]: time="2025-06-20T18:57:44.291671378Z" level=info msg="Container to stop \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:44.291693 containerd[1906]: time="2025-06-20T18:57:44.291679732Z" level=info msg="Container to stop \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:44.291693 containerd[1906]: time="2025-06-20T18:57:44.291688022Z" level=info msg="Container to stop \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:44.299508 systemd[1]: cri-containerd-c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5.scope: Deactivated successfully. 
Jun 20 18:57:44.318387 containerd[1906]: time="2025-06-20T18:57:44.318314121Z" level=info msg="shim disconnected" id=936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c namespace=k8s.io Jun 20 18:57:44.318387 containerd[1906]: time="2025-06-20T18:57:44.318367974Z" level=warning msg="cleaning up after shim disconnected" id=936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c namespace=k8s.io Jun 20 18:57:44.318387 containerd[1906]: time="2025-06-20T18:57:44.318376094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:44.358895 containerd[1906]: time="2025-06-20T18:57:44.358763932Z" level=info msg="shim disconnected" id=c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5 namespace=k8s.io Jun 20 18:57:44.358895 containerd[1906]: time="2025-06-20T18:57:44.358812242Z" level=warning msg="cleaning up after shim disconnected" id=c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5 namespace=k8s.io Jun 20 18:57:44.358895 containerd[1906]: time="2025-06-20T18:57:44.358820764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:44.360447 containerd[1906]: time="2025-06-20T18:57:44.360154441Z" level=info msg="TearDown network for sandbox \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\" successfully" Jun 20 18:57:44.360447 containerd[1906]: time="2025-06-20T18:57:44.360171924Z" level=info msg="StopPodSandbox for \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\" returns successfully" Jun 20 18:57:44.378584 containerd[1906]: time="2025-06-20T18:57:44.378541024Z" level=info msg="TearDown network for sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" successfully" Jun 20 18:57:44.378584 containerd[1906]: time="2025-06-20T18:57:44.378570693Z" level=info msg="StopPodSandbox for \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" returns successfully" Jun 20 18:57:44.463654 kubelet[3159]: I0620 18:57:44.463072 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-etc-cni-netd\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.463654 kubelet[3159]: I0620 18:57:44.463120 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-lib-modules\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.463654 kubelet[3159]: I0620 18:57:44.463145 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70026d22-746e-4af5-bc9e-220e8faac69a-cilium-config-path\") pod \"70026d22-746e-4af5-bc9e-220e8faac69a\" (UID: \"70026d22-746e-4af5-bc9e-220e8faac69a\") " Jun 20 18:57:44.463654 kubelet[3159]: I0620 18:57:44.463166 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-config-path\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.463654 kubelet[3159]: I0620 18:57:44.463326 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-hubble-tls\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.463654 kubelet[3159]: I0620 18:57:44.463342 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p2hj\" (UniqueName: \"kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-kube-api-access-6p2hj\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.464189 kubelet[3159]: I0620 18:57:44.463360 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-cgroup\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.464189 kubelet[3159]: I0620 18:57:44.463393 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-host-proc-sys-net\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.464189 kubelet[3159]: I0620 18:57:44.463411 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-hostproc\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.464189 kubelet[3159]: I0620 18:57:44.463425 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-bpf-maps\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.464189 kubelet[3159]: I0620 18:57:44.463441 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjpxs\" (UniqueName: \"kubernetes.io/projected/70026d22-746e-4af5-bc9e-220e8faac69a-kube-api-access-rjpxs\") pod \"70026d22-746e-4af5-bc9e-220e8faac69a\" (UID: \"70026d22-746e-4af5-bc9e-220e8faac69a\") " Jun 20 18:57:44.464189 kubelet[3159]: I0620 18:57:44.463460 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cni-path\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.464872 kubelet[3159]: I0620 18:57:44.463473 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-xtables-lock\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.464872 kubelet[3159]: I0620 18:57:44.463488 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-host-proc-sys-kernel\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.464872 kubelet[3159]: I0620 18:57:44.463504 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/016e655e-394d-46be-bb35-eb13463c6ac4-clustermesh-secrets\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.464872 kubelet[3159]: I0620 18:57:44.463518 3159 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-run\") pod \"016e655e-394d-46be-bb35-eb13463c6ac4\" (UID: \"016e655e-394d-46be-bb35-eb13463c6ac4\") " Jun 20 18:57:44.469642 kubelet[3159]: I0620 18:57:44.468428 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.469642 kubelet[3159]: I0620 18:57:44.467985 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.469642 kubelet[3159]: I0620 18:57:44.469431 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.469642 kubelet[3159]: I0620 18:57:44.469449 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-hostproc" (OuterVolumeSpecName: "hostproc") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.469642 kubelet[3159]: I0620 18:57:44.469451 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.481675 kubelet[3159]: I0620 18:57:44.481621 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cni-path" (OuterVolumeSpecName: "cni-path") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.481675 kubelet[3159]: I0620 18:57:44.481676 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.481853 kubelet[3159]: I0620 18:57:44.481696 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.485807 kubelet[3159]: I0620 18:57:44.485524 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 18:57:44.485931 kubelet[3159]: I0620 18:57:44.485906 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016e655e-394d-46be-bb35-eb13463c6ac4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 18:57:44.488378 kubelet[3159]: I0620 18:57:44.488348 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 18:57:44.488521 kubelet[3159]: I0620 18:57:44.488433 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70026d22-746e-4af5-bc9e-220e8faac69a-kube-api-access-rjpxs" (OuterVolumeSpecName: "kube-api-access-rjpxs") pod "70026d22-746e-4af5-bc9e-220e8faac69a" (UID: "70026d22-746e-4af5-bc9e-220e8faac69a"). InnerVolumeSpecName "kube-api-access-rjpxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 18:57:44.490535 kubelet[3159]: I0620 18:57:44.490501 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70026d22-746e-4af5-bc9e-220e8faac69a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70026d22-746e-4af5-bc9e-220e8faac69a" (UID: "70026d22-746e-4af5-bc9e-220e8faac69a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 18:57:44.490621 kubelet[3159]: I0620 18:57:44.490553 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.490621 kubelet[3159]: I0620 18:57:44.490571 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:57:44.491158 kubelet[3159]: I0620 18:57:44.491104 3159 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-kube-api-access-6p2hj" (OuterVolumeSpecName: "kube-api-access-6p2hj") pod "016e655e-394d-46be-bb35-eb13463c6ac4" (UID: "016e655e-394d-46be-bb35-eb13463c6ac4"). InnerVolumeSpecName "kube-api-access-6p2hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 18:57:44.564582 kubelet[3159]: I0620 18:57:44.564529 3159 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cni-path\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564582 kubelet[3159]: I0620 18:57:44.564574 3159 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-host-proc-sys-kernel\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564582 kubelet[3159]: I0620 18:57:44.564587 3159 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/016e655e-394d-46be-bb35-eb13463c6ac4-clustermesh-secrets\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564582 kubelet[3159]: I0620 18:57:44.564596 3159 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-run\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564807 kubelet[3159]: I0620 18:57:44.564604 3159 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-xtables-lock\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564807 kubelet[3159]: I0620 18:57:44.564612 3159 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-etc-cni-netd\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564807 kubelet[3159]: I0620 18:57:44.564620 3159 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-lib-modules\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564807 kubelet[3159]: I0620 18:57:44.564628 3159 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70026d22-746e-4af5-bc9e-220e8faac69a-cilium-config-path\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564807 kubelet[3159]: I0620 18:57:44.564635 3159 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-config-path\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564807 kubelet[3159]: I0620 18:57:44.564642 3159 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-hubble-tls\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564807 kubelet[3159]: I0620 18:57:44.564649 3159 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p2hj\" (UniqueName: 
\"kubernetes.io/projected/016e655e-394d-46be-bb35-eb13463c6ac4-kube-api-access-6p2hj\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.564807 kubelet[3159]: I0620 18:57:44.564656 3159 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-host-proc-sys-net\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.565614 kubelet[3159]: I0620 18:57:44.564665 3159 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-cilium-cgroup\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.565614 kubelet[3159]: I0620 18:57:44.564672 3159 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-bpf-maps\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.565614 kubelet[3159]: I0620 18:57:44.564680 3159 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjpxs\" (UniqueName: \"kubernetes.io/projected/70026d22-746e-4af5-bc9e-220e8faac69a-kube-api-access-rjpxs\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.565614 kubelet[3159]: I0620 18:57:44.564687 3159 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/016e655e-394d-46be-bb35-eb13463c6ac4-hostproc\") on node \"ip-172-31-28-28\" DevicePath \"\"" Jun 20 18:57:44.601728 kubelet[3159]: E0620 18:57:44.601670 3159 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:57:44.784837 systemd[1]: Removed slice kubepods-burstable-pod016e655e_394d_46be_bb35_eb13463c6ac4.slice - libcontainer container kubepods-burstable-pod016e655e_394d_46be_bb35_eb13463c6ac4.slice. Jun 20 18:57:44.785381 systemd[1]: kubepods-burstable-pod016e655e_394d_46be_bb35_eb13463c6ac4.slice: Consumed 7.631s CPU time, 189.1M memory peak, 65.4M read from disk, 13.3M written to disk. Jun 20 18:57:44.786439 kubelet[3159]: I0620 18:57:44.785637 3159 scope.go:117] "RemoveContainer" containerID="63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68" Jun 20 18:57:44.798317 containerd[1906]: time="2025-06-20T18:57:44.798074983Z" level=info msg="RemoveContainer for \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\"" Jun 20 18:57:44.802968 systemd[1]: Removed slice kubepods-besteffort-pod70026d22_746e_4af5_bc9e_220e8faac69a.slice - libcontainer container kubepods-besteffort-pod70026d22_746e_4af5_bc9e_220e8faac69a.slice. Jun 20 18:57:44.803541 systemd[1]: kubepods-besteffort-pod70026d22_746e_4af5_bc9e_220e8faac69a.slice: Consumed 441ms CPU time, 33.1M memory peak, 7.5M read from disk, 4K written to disk. 
Jun 20 18:57:44.808538 containerd[1906]: time="2025-06-20T18:57:44.808385717Z" level=info msg="RemoveContainer for \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\" returns successfully" Jun 20 18:57:44.818772 kubelet[3159]: I0620 18:57:44.818733 3159 scope.go:117] "RemoveContainer" containerID="e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb" Jun 20 18:57:44.821355 containerd[1906]: time="2025-06-20T18:57:44.820848266Z" level=info msg="RemoveContainer for \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\"" Jun 20 18:57:44.826132 containerd[1906]: time="2025-06-20T18:57:44.826094779Z" level=info msg="RemoveContainer for \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\" returns successfully" Jun 20 18:57:44.826358 kubelet[3159]: I0620 18:57:44.826336 3159 scope.go:117] "RemoveContainer" containerID="989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44" Jun 20 18:57:44.828129 containerd[1906]: time="2025-06-20T18:57:44.828102858Z" level=info msg="RemoveContainer for \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\"" Jun 20 18:57:44.835383 containerd[1906]: time="2025-06-20T18:57:44.835345094Z" level=info msg="RemoveContainer for \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\" returns successfully" Jun 20 18:57:44.835907 kubelet[3159]: I0620 18:57:44.835659 3159 scope.go:117] "RemoveContainer" containerID="10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8" Jun 20 18:57:44.837062 containerd[1906]: time="2025-06-20T18:57:44.836829061Z" level=info msg="RemoveContainer for \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\"" Jun 20 18:57:44.842924 containerd[1906]: time="2025-06-20T18:57:44.842342002Z" level=info msg="RemoveContainer for \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\" returns successfully" Jun 20 18:57:44.844499 kubelet[3159]: I0620 18:57:44.844224 3159 scope.go:117] "RemoveContainer" containerID="b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53" Jun 20 18:57:44.845178 containerd[1906]: time="2025-06-20T18:57:44.845150593Z" level=info msg="RemoveContainer for \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\"" Jun 20 18:57:44.851281 containerd[1906]: time="2025-06-20T18:57:44.850882639Z" level=info msg="RemoveContainer for \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\" returns successfully" Jun 20 18:57:44.851394 kubelet[3159]: I0620 18:57:44.851056 3159 scope.go:117] "RemoveContainer" containerID="63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68" Jun 20 18:57:44.855083 containerd[1906]: time="2025-06-20T18:57:44.855028292Z" level=error msg="ContainerStatus for \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\": not found" Jun 20 18:57:44.855526 kubelet[3159]: E0620 18:57:44.855457 3159 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\": not found" containerID="63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68" Jun 20 18:57:44.871125 kubelet[3159]: I0620 18:57:44.855489 3159 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68"} err="failed to get container status \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\": rpc error: code = NotFound desc = an error occurred when try to find container \"63aa61da7be374fa943c00518c74631a9c4367ced05cee037926d92cf3afba68\": not found" Jun 20 18:57:44.871125 kubelet[3159]: I0620 18:57:44.871112 3159 scope.go:117] "RemoveContainer" containerID="e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb" Jun 20 18:57:44.871608 containerd[1906]: time="2025-06-20T18:57:44.871500275Z" level=error msg="ContainerStatus for \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\": not found" Jun 20 18:57:44.871727 kubelet[3159]: E0620 18:57:44.871672 3159 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\": not found" containerID="e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb" Jun 20 18:57:44.871727 kubelet[3159]: I0620 18:57:44.871699 3159 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb"} err="failed to get container status \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e729957ffd21f0dba79e8645732f6d7636e7f2d3fe1d78440a5ea7e47a35d8bb\": not found" Jun 20 18:57:44.871727 kubelet[3159]: I0620 18:57:44.871718 3159 scope.go:117] "RemoveContainer" containerID="989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44" Jun 20 18:57:44.872461 kubelet[3159]: E0620 18:57:44.872065 3159 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\": not found" containerID="989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44" Jun 20 18:57:44.872461 kubelet[3159]: I0620 18:57:44.872086 3159 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44"} err="failed to get container status \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\": rpc error: code = NotFound desc = an error occurred when try to find container \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\": not found" Jun 20 18:57:44.872461 kubelet[3159]: I0620 18:57:44.872105 3159 scope.go:117] "RemoveContainer" containerID="10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8" Jun 20 18:57:44.872461 kubelet[3159]: E0620 18:57:44.872367 3159 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\": not found" containerID="10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8" Jun 20 18:57:44.872594 containerd[1906]: time="2025-06-20T18:57:44.871922408Z" level=error msg="ContainerStatus for \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"989ba8e972d8926795423b21b9600d15a17edd8fd581ad93b776abcf3a35bc44\": not found" Jun 20 18:57:44.872594 containerd[1906]: time="2025-06-20T18:57:44.872275643Z" level=error msg="ContainerStatus for \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\": not found" Jun 20 18:57:44.872655 kubelet[3159]: I0620 18:57:44.872458 3159 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8"} err="failed to get container status \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"10fd0b73b4897e9e46e1e3c6f341cbacec61421f406c2a2766576f4d615fddb8\": not found" Jun 20 18:57:44.872655 kubelet[3159]: I0620 18:57:44.872474 3159 scope.go:117] "RemoveContainer" containerID="b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53" Jun 20 18:57:44.872766 containerd[1906]: time="2025-06-20T18:57:44.872733103Z" level=error msg="ContainerStatus for \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\": not found" Jun 20 18:57:44.872880 kubelet[3159]: E0620 18:57:44.872861 3159 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\": not found" containerID="b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53" Jun 20 18:57:44.872952 kubelet[3159]: I0620 18:57:44.872878 3159 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53"} err="failed to get container status \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\": rpc error: code = NotFound desc = an error occurred when try to find container \"b45fbda7f961d567afa697ccae7fdede90a6d9677efeb763890449be21124f53\": not found" Jun 20 18:57:44.872952 kubelet[3159]: I0620 18:57:44.872892 3159 scope.go:117] "RemoveContainer" containerID="ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457" Jun 20 18:57:44.874003 containerd[1906]: time="2025-06-20T18:57:44.873913424Z" level=info msg="RemoveContainer for \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\"" Jun 20 18:57:44.879164 containerd[1906]: time="2025-06-20T18:57:44.879125831Z" level=info msg="RemoveContainer for \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\" returns successfully" Jun 20 18:57:44.879614 kubelet[3159]: I0620 18:57:44.879502 3159 scope.go:117] "RemoveContainer" containerID="ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457" Jun 20 18:57:44.879883 containerd[1906]: time="2025-06-20T18:57:44.879842551Z" level=error msg="ContainerStatus for \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\": not found" Jun 20 18:57:44.880059 
kubelet[3159]: E0620 18:57:44.880037 3159 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\": not found" containerID="ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457" Jun 20 18:57:44.880124 kubelet[3159]: I0620 18:57:44.880063 3159 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457"} err="failed to get container status \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec1e03099fb14592e265a8073898823bca2d18745bd0d01483d788b18502d457\": not found" Jun 20 18:57:45.143301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5-rootfs.mount: Deactivated successfully. Jun 20 18:57:45.143440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5-shm.mount: Deactivated successfully. Jun 20 18:57:45.143510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c-rootfs.mount: Deactivated successfully. Jun 20 18:57:45.143572 systemd[1]: var-lib-kubelet-pods-016e655e\x2d394d\x2d46be\x2dbb35\x2deb13463c6ac4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6p2hj.mount: Deactivated successfully. Jun 20 18:57:45.143636 systemd[1]: var-lib-kubelet-pods-70026d22\x2d746e\x2d4af5\x2dbc9e\x2d220e8faac69a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjpxs.mount: Deactivated successfully. Jun 20 18:57:45.143697 systemd[1]: var-lib-kubelet-pods-016e655e\x2d394d\x2d46be\x2dbb35\x2deb13463c6ac4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 18:57:45.143761 systemd[1]: var-lib-kubelet-pods-016e655e\x2d394d\x2d46be\x2dbb35\x2deb13463c6ac4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 18:57:45.488435 kubelet[3159]: I0620 18:57:45.488396 3159 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016e655e-394d-46be-bb35-eb13463c6ac4" path="/var/lib/kubelet/pods/016e655e-394d-46be-bb35-eb13463c6ac4/volumes" Jun 20 18:57:45.488990 kubelet[3159]: I0620 18:57:45.488969 3159 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70026d22-746e-4af5-bc9e-220e8faac69a" path="/var/lib/kubelet/pods/70026d22-746e-4af5-bc9e-220e8faac69a/volumes" Jun 20 18:57:46.023997 sshd[4939]: Connection closed by 139.178.68.195 port 56908 Jun 20 18:57:46.024789 sshd-session[4937]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:46.029757 systemd[1]: sshd@22-172.31.28.28:22-139.178.68.195:56908.service: Deactivated successfully. Jun 20 18:57:46.032391 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:57:46.033466 systemd-logind[1888]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:57:46.034776 systemd-logind[1888]: Removed session 23. Jun 20 18:57:46.058611 systemd[1]: Started sshd@23-172.31.28.28:22-139.178.68.195:50888.service - OpenSSH per-connection server daemon (139.178.68.195:50888). 
Jun 20 18:57:46.223788 sshd[5098]: Accepted publickey for core from 139.178.68.195 port 50888 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:46.225124 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:46.229877 systemd-logind[1888]: New session 24 of user core. Jun 20 18:57:46.236504 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 18:57:46.861417 ntpd[1883]: Deleting interface #12 lxc_health, fe80::f05d:42ff:fe67:c0ff%8#123, interface stats: received=0, sent=0, dropped=0, active_time=55 secs Jun 20 18:57:46.861967 ntpd[1883]: 20 Jun 18:57:46 ntpd[1883]: Deleting interface #12 lxc_health, fe80::f05d:42ff:fe67:c0ff%8#123, interface stats: received=0, sent=0, dropped=0, active_time=55 secs Jun 20 18:57:46.872601 sshd[5100]: Connection closed by 139.178.68.195 port 50888 Jun 20 18:57:46.872484 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:46.881724 systemd[1]: sshd@23-172.31.28.28:22-139.178.68.195:50888.service: Deactivated successfully. Jun 20 18:57:46.885199 kubelet[3159]: E0620 18:57:46.885157 3159 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70026d22-746e-4af5-bc9e-220e8faac69a" containerName="cilium-operator" Jun 20 18:57:46.885199 kubelet[3159]: E0620 18:57:46.885197 3159 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="016e655e-394d-46be-bb35-eb13463c6ac4" containerName="mount-cgroup" Jun 20 18:57:46.885661 kubelet[3159]: E0620 18:57:46.885207 3159 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="016e655e-394d-46be-bb35-eb13463c6ac4" containerName="apply-sysctl-overwrites" Jun 20 18:57:46.885661 kubelet[3159]: E0620 18:57:46.885233 3159 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="016e655e-394d-46be-bb35-eb13463c6ac4" containerName="clean-cilium-state" Jun 20 18:57:46.885661 kubelet[3159]: E0620 18:57:46.885243 3159 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="016e655e-394d-46be-bb35-eb13463c6ac4" containerName="cilium-agent" Jun 20 18:57:46.885661 kubelet[3159]: E0620 18:57:46.885252 3159 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="016e655e-394d-46be-bb35-eb13463c6ac4" containerName="mount-bpf-fs" Jun 20 18:57:46.885661 kubelet[3159]: I0620 18:57:46.885287 3159 memory_manager.go:354] "RemoveStaleState removing state" podUID="016e655e-394d-46be-bb35-eb13463c6ac4" containerName="cilium-agent" Jun 20 18:57:46.885661 kubelet[3159]: I0620 18:57:46.885301 3159 memory_manager.go:354] "RemoveStaleState removing state" podUID="70026d22-746e-4af5-bc9e-220e8faac69a" containerName="cilium-operator" Jun 20 18:57:46.887196 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:57:46.891527 systemd-logind[1888]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:57:46.917645 systemd[1]: Started sshd@24-172.31.28.28:22-139.178.68.195:50896.service - OpenSSH per-connection server daemon (139.178.68.195:50896). Jun 20 18:57:46.923920 systemd-logind[1888]: Removed session 24. Jun 20 18:57:46.945035 systemd[1]: Created slice kubepods-burstable-pod83170804_4d77_4d79_abc9_ce198ffbffd8.slice - libcontainer container kubepods-burstable-pod83170804_4d77_4d79_abc9_ce198ffbffd8.slice. 
Jun 20 18:57:46.980539 kubelet[3159]: I0620 18:57:46.980506 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-host-proc-sys-net\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980679 kubelet[3159]: I0620 18:57:46.980547 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-host-proc-sys-kernel\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980679 kubelet[3159]: I0620 18:57:46.980570 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt5jh\" (UniqueName: \"kubernetes.io/projected/83170804-4d77-4d79-abc9-ce198ffbffd8-kube-api-access-dt5jh\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980679 kubelet[3159]: I0620 18:57:46.980600 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-lib-modules\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980679 kubelet[3159]: I0620 18:57:46.980620 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83170804-4d77-4d79-abc9-ce198ffbffd8-cilium-config-path\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980679 kubelet[3159]: I0620 18:57:46.980642 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-cilium-run\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980964 kubelet[3159]: I0620 18:57:46.980664 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-etc-cni-netd\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980964 kubelet[3159]: I0620 18:57:46.980687 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-cilium-cgroup\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980964 kubelet[3159]: I0620 18:57:46.980711 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-cni-path\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980964 kubelet[3159]: I0620 18:57:46.980734 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83170804-4d77-4d79-abc9-ce198ffbffd8-cilium-ipsec-secrets\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980964 kubelet[3159]: I0620 18:57:46.980757 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83170804-4d77-4d79-abc9-ce198ffbffd8-hubble-tls\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.980964 kubelet[3159]: I0620 18:57:46.980779 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-bpf-maps\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.982086 kubelet[3159]: I0620 18:57:46.980811 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-xtables-lock\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.982086 kubelet[3159]: I0620 18:57:46.980834 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83170804-4d77-4d79-abc9-ce198ffbffd8-hostproc\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:46.982086 kubelet[3159]: I0620 18:57:46.980858 3159 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83170804-4d77-4d79-abc9-ce198ffbffd8-clustermesh-secrets\") pod \"cilium-qwrj2\" (UID: \"83170804-4d77-4d79-abc9-ce198ffbffd8\") " pod="kube-system/cilium-qwrj2" Jun 20 18:57:47.133092 sshd[5110]: Accepted publickey for core from 139.178.68.195 port 50896 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:47.134577 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:47.139485 systemd-logind[1888]: New session 25 of user core. Jun 20 18:57:47.143386 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 18:57:47.265755 sshd[5117]: Connection closed by 139.178.68.195 port 50896 Jun 20 18:57:47.266921 containerd[1906]: time="2025-06-20T18:57:47.266436833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwrj2,Uid:83170804-4d77-4d79-abc9-ce198ffbffd8,Namespace:kube-system,Attempt:0,}" Jun 20 18:57:47.267506 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:47.275059 systemd[1]: sshd@24-172.31.28.28:22-139.178.68.195:50896.service: Deactivated successfully. Jun 20 18:57:47.275360 systemd-logind[1888]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:57:47.278091 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:57:47.281175 systemd-logind[1888]: Removed session 25. Jun 20 18:57:47.308973 containerd[1906]: time="2025-06-20T18:57:47.308728273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:57:47.308973 containerd[1906]: time="2025-06-20T18:57:47.308923045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:57:47.309153 containerd[1906]: time="2025-06-20T18:57:47.309003230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:57:47.309856 containerd[1906]: time="2025-06-20T18:57:47.309245905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:57:47.315235 systemd[1]: Started sshd@25-172.31.28.28:22-139.178.68.195:50904.service - OpenSSH per-connection server daemon (139.178.68.195:50904). Jun 20 18:57:47.337470 systemd[1]: Started cri-containerd-39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673.scope - libcontainer container 39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673. Jun 20 18:57:47.368252 containerd[1906]: time="2025-06-20T18:57:47.368180464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwrj2,Uid:83170804-4d77-4d79-abc9-ce198ffbffd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\"" Jun 20 18:57:47.372538 containerd[1906]: time="2025-06-20T18:57:47.372338271Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:57:47.392441 containerd[1906]: time="2025-06-20T18:57:47.392209358Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dde2994211333a7069ba02d10bc1bf48109047a30be004e53e6f49880f2c2157\"" Jun 20 18:57:47.395857 containerd[1906]: time="2025-06-20T18:57:47.393798065Z" level=info msg="StartContainer for \"dde2994211333a7069ba02d10bc1bf48109047a30be004e53e6f49880f2c2157\"" Jun 20 18:57:47.426416 systemd[1]: Started cri-containerd-dde2994211333a7069ba02d10bc1bf48109047a30be004e53e6f49880f2c2157.scope - libcontainer container dde2994211333a7069ba02d10bc1bf48109047a30be004e53e6f49880f2c2157. Jun 20 18:57:47.462285 containerd[1906]: time="2025-06-20T18:57:47.462139638Z" level=info msg="StartContainer for \"dde2994211333a7069ba02d10bc1bf48109047a30be004e53e6f49880f2c2157\" returns successfully" Jun 20 18:57:47.476840 systemd[1]: cri-containerd-dde2994211333a7069ba02d10bc1bf48109047a30be004e53e6f49880f2c2157.scope: Deactivated successfully. Jun 20 18:57:47.502826 sshd[5144]: Accepted publickey for core from 139.178.68.195 port 50904 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:57:47.505358 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:47.512518 systemd-logind[1888]: New session 26 of user core. Jun 20 18:57:47.520408 systemd[1]: Started session-26.scope - Session 26 of User core. 
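The VerifyControllerAttachedVolume entries above list the hostPath, ConfigMap, Secret, and projected volumes wired into the new cilium-qwrj2 pod before containerd starts its sandbox. A minimal sketch for cross-checking those volumes against the live pod spec with the official Kubernetes Python client, assuming kubeconfig-based access to this cluster (nothing below is taken from the log itself):

```python
# Sketch: list the volumes of the cilium-qwrj2 pod whose attachment is
# being verified in the kubelet entries above. Assumes a reachable cluster
# and a local kubeconfig.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-qwrj2", "kube-system")
for vol in pod.spec.volumes:
    if vol.host_path:
        print(f"{vol.name}: hostPath {vol.host_path.path}")
    elif vol.secret:
        print(f"{vol.name}: secret {vol.secret.secret_name}")
    elif vol.config_map:
        print(f"{vol.name}: configMap {vol.config_map.name}")
    elif vol.projected:
        print(f"{vol.name}: projected (service account token)")
```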
Jun 20 18:57:47.552562 containerd[1906]: time="2025-06-20T18:57:47.552496780Z" level=info msg="shim disconnected" id=dde2994211333a7069ba02d10bc1bf48109047a30be004e53e6f49880f2c2157 namespace=k8s.io Jun 20 18:57:47.552562 containerd[1906]: time="2025-06-20T18:57:47.552561508Z" level=warning msg="cleaning up after shim disconnected" id=dde2994211333a7069ba02d10bc1bf48109047a30be004e53e6f49880f2c2157 namespace=k8s.io Jun 20 18:57:47.552562 containerd[1906]: time="2025-06-20T18:57:47.552570698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:47.565527 containerd[1906]: time="2025-06-20T18:57:47.565454833Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:57:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 18:57:47.807854 containerd[1906]: time="2025-06-20T18:57:47.807682906Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:57:47.826417 containerd[1906]: time="2025-06-20T18:57:47.826339658Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05eeb56af875e98c858cee0b91452b908ca0208ab9222ac8d289f08e3769b595\"" Jun 20 18:57:47.826901 containerd[1906]: time="2025-06-20T18:57:47.826776329Z" level=info msg="StartContainer for \"05eeb56af875e98c858cee0b91452b908ca0208ab9222ac8d289f08e3769b595\"" Jun 20 18:57:47.848517 systemd[1]: Started cri-containerd-05eeb56af875e98c858cee0b91452b908ca0208ab9222ac8d289f08e3769b595.scope - libcontainer container 05eeb56af875e98c858cee0b91452b908ca0208ab9222ac8d289f08e3769b595. Jun 20 18:57:47.879292 containerd[1906]: time="2025-06-20T18:57:47.879135630Z" level=info msg="StartContainer for \"05eeb56af875e98c858cee0b91452b908ca0208ab9222ac8d289f08e3769b595\" returns successfully" Jun 20 18:57:47.886456 systemd[1]: cri-containerd-05eeb56af875e98c858cee0b91452b908ca0208ab9222ac8d289f08e3769b595.scope: Deactivated successfully. 
Jun 20 18:57:47.921150 containerd[1906]: time="2025-06-20T18:57:47.921090009Z" level=info msg="shim disconnected" id=05eeb56af875e98c858cee0b91452b908ca0208ab9222ac8d289f08e3769b595 namespace=k8s.io Jun 20 18:57:47.921150 containerd[1906]: time="2025-06-20T18:57:47.921142329Z" level=warning msg="cleaning up after shim disconnected" id=05eeb56af875e98c858cee0b91452b908ca0208ab9222ac8d289f08e3769b595 namespace=k8s.io Jun 20 18:57:47.921150 containerd[1906]: time="2025-06-20T18:57:47.921151241Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:48.811437 containerd[1906]: time="2025-06-20T18:57:48.811401608Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:57:48.838901 containerd[1906]: time="2025-06-20T18:57:48.838858406Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f728cfdabb141b23ea7d17cbc7ef2783a5594a2e6a8fc11e5ac31acfb49774a\"" Jun 20 18:57:48.840619 containerd[1906]: time="2025-06-20T18:57:48.840585332Z" level=info msg="StartContainer for \"1f728cfdabb141b23ea7d17cbc7ef2783a5594a2e6a8fc11e5ac31acfb49774a\"" Jun 20 18:57:48.875616 systemd[1]: Started cri-containerd-1f728cfdabb141b23ea7d17cbc7ef2783a5594a2e6a8fc11e5ac31acfb49774a.scope - libcontainer container 1f728cfdabb141b23ea7d17cbc7ef2783a5594a2e6a8fc11e5ac31acfb49774a. Jun 20 18:57:48.913133 containerd[1906]: time="2025-06-20T18:57:48.913075928Z" level=info msg="StartContainer for \"1f728cfdabb141b23ea7d17cbc7ef2783a5594a2e6a8fc11e5ac31acfb49774a\" returns successfully" Jun 20 18:57:48.919775 systemd[1]: cri-containerd-1f728cfdabb141b23ea7d17cbc7ef2783a5594a2e6a8fc11e5ac31acfb49774a.scope: Deactivated successfully. Jun 20 18:57:48.955870 containerd[1906]: time="2025-06-20T18:57:48.955781611Z" level=info msg="shim disconnected" id=1f728cfdabb141b23ea7d17cbc7ef2783a5594a2e6a8fc11e5ac31acfb49774a namespace=k8s.io Jun 20 18:57:48.955870 containerd[1906]: time="2025-06-20T18:57:48.955843739Z" level=warning msg="cleaning up after shim disconnected" id=1f728cfdabb141b23ea7d17cbc7ef2783a5594a2e6a8fc11e5ac31acfb49774a namespace=k8s.io Jun 20 18:57:48.955870 containerd[1906]: time="2025-06-20T18:57:48.955857736Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:49.097955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f728cfdabb141b23ea7d17cbc7ef2783a5594a2e6a8fc11e5ac31acfb49774a-rootfs.mount: Deactivated successfully. 
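The repeating CreateContainer / StartContainer / `scope: Deactivated` / `shim disconnected` cycles for mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs are Cilium's init containers running to completion one after another inside the same sandbox. A sketch, under the same cluster-access assumption as above, that reads their terminal states:

```python
# Sketch: report the exit status of each init container of cilium-qwrj2,
# corresponding to the short-lived containers seen in the log above.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-qwrj2", "kube-system")
for st in pod.status.init_container_statuses or []:
    term = st.state.terminated
    if term:
        print(f"{st.name}: exited {term.exit_code} at {term.finished_at}")
    else:
        print(f"{st.name}: still waiting or running")
```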
Jun 20 18:57:49.603320 kubelet[3159]: E0620 18:57:49.603244 3159 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:57:49.815846 containerd[1906]: time="2025-06-20T18:57:49.815793218Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:57:49.838453 containerd[1906]: time="2025-06-20T18:57:49.838337077Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13b9f82061162c5f4b20a99830d5775b36c199adf72f7daee1b382bc91e3789b\"" Jun 20 18:57:49.839561 containerd[1906]: time="2025-06-20T18:57:49.839385910Z" level=info msg="StartContainer for \"13b9f82061162c5f4b20a99830d5775b36c199adf72f7daee1b382bc91e3789b\"" Jun 20 18:57:49.879539 systemd[1]: Started cri-containerd-13b9f82061162c5f4b20a99830d5775b36c199adf72f7daee1b382bc91e3789b.scope - libcontainer container 13b9f82061162c5f4b20a99830d5775b36c199adf72f7daee1b382bc91e3789b. Jun 20 18:57:49.930374 systemd[1]: cri-containerd-13b9f82061162c5f4b20a99830d5775b36c199adf72f7daee1b382bc91e3789b.scope: Deactivated successfully. Jun 20 18:57:49.935681 containerd[1906]: time="2025-06-20T18:57:49.935648626Z" level=info msg="StartContainer for \"13b9f82061162c5f4b20a99830d5775b36c199adf72f7daee1b382bc91e3789b\" returns successfully" Jun 20 18:57:49.968593 containerd[1906]: time="2025-06-20T18:57:49.968504799Z" level=info msg="shim disconnected" id=13b9f82061162c5f4b20a99830d5775b36c199adf72f7daee1b382bc91e3789b namespace=k8s.io Jun 20 18:57:49.968593 containerd[1906]: time="2025-06-20T18:57:49.968555095Z" level=warning msg="cleaning up after shim disconnected" id=13b9f82061162c5f4b20a99830d5775b36c199adf72f7daee1b382bc91e3789b namespace=k8s.io Jun 20 18:57:49.968593 containerd[1906]: time="2025-06-20T18:57:49.968564293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:50.098802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13b9f82061162c5f4b20a99830d5775b36c199adf72f7daee1b382bc91e3789b-rootfs.mount: Deactivated successfully. Jun 20 18:57:50.819891 containerd[1906]: time="2025-06-20T18:57:50.819545691Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:57:50.848418 containerd[1906]: time="2025-06-20T18:57:50.848373983Z" level=info msg="CreateContainer within sandbox \"39c918adb3b43b2fec1a2d674c34c15fcff9bb9128e9a9078c76aefcaaab2673\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bfa8658c12be5cae6c60a12c4c670037c0b8f88f52bf618d2217cd5276c88f84\"" Jun 20 18:57:50.848885 containerd[1906]: time="2025-06-20T18:57:50.848863746Z" level=info msg="StartContainer for \"bfa8658c12be5cae6c60a12c4c670037c0b8f88f52bf618d2217cd5276c88f84\"" Jun 20 18:57:50.889482 systemd[1]: Started cri-containerd-bfa8658c12be5cae6c60a12c4c670037c0b8f88f52bf618d2217cd5276c88f84.scope - libcontainer container bfa8658c12be5cae6c60a12c4c670037c0b8f88f52bf618d2217cd5276c88f84. 
Jun 20 18:57:50.933679 containerd[1906]: time="2025-06-20T18:57:50.933148779Z" level=info msg="StartContainer for \"bfa8658c12be5cae6c60a12c4c670037c0b8f88f52bf618d2217cd5276c88f84\" returns successfully" Jun 20 18:57:51.365269 kubelet[3159]: I0620 18:57:51.363861 3159 setters.go:600] "Node became not ready" node="ip-172-31-28-28" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T18:57:51Z","lastTransitionTime":"2025-06-20T18:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 18:57:51.632284 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 20 18:57:51.843377 kubelet[3159]: I0620 18:57:51.843314 3159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qwrj2" podStartSLOduration=5.843295851 podStartE2EDuration="5.843295851s" podCreationTimestamp="2025-06-20 18:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:57:51.842954607 +0000 UTC m=+92.476700391" watchObservedRunningTime="2025-06-20 18:57:51.843295851 +0000 UTC m=+92.477041636" Jun 20 18:57:54.655126 (udev-worker)[5497]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:57:54.657412 (udev-worker)[5978]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:57:54.668891 systemd-networkd[1783]: lxc_health: Link UP Jun 20 18:57:54.675461 systemd-networkd[1783]: lxc_health: Gained carrier Jun 20 18:57:55.872379 systemd-networkd[1783]: lxc_health: Gained IPv6LL Jun 20 18:57:56.336116 systemd[1]: run-containerd-runc-k8s.io-bfa8658c12be5cae6c60a12c4c670037c0b8f88f52bf618d2217cd5276c88f84-runc.ruzJRg.mount: Deactivated successfully. Jun 20 18:57:58.644681 systemd[1]: run-containerd-runc-k8s.io-bfa8658c12be5cae6c60a12c4c670037c0b8f88f52bf618d2217cd5276c88f84-runc.fvtGfz.mount: Deactivated successfully. Jun 20 18:57:58.722571 kubelet[3159]: E0620 18:57:58.722468 3159 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41724->127.0.0.1:33003: write tcp 127.0.0.1:41724->127.0.0.1:33003: write: broken pipe Jun 20 18:57:58.861441 ntpd[1883]: Listen normally on 15 lxc_health [fe80::74fe:fcff:fee7:edf4%14]:123 Jun 20 18:57:58.861869 ntpd[1883]: 20 Jun 18:57:58 ntpd[1883]: Listen normally on 15 lxc_health [fe80::74fe:fcff:fee7:edf4%14]:123 Jun 20 18:58:03.032761 sshd[5218]: Connection closed by 139.178.68.195 port 50904 Jun 20 18:58:03.034270 sshd-session[5144]: pam_unix(sshd:session): session closed for user core Jun 20 18:58:03.038046 systemd[1]: sshd@25-172.31.28.28:22-139.178.68.195:50904.service: Deactivated successfully. Jun 20 18:58:03.040077 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 18:58:03.041108 systemd-logind[1888]: Session 26 logged out. Waiting for processes to exit. Jun 20 18:58:03.042437 systemd-logind[1888]: Removed session 26. Jun 20 18:58:17.083790 systemd[1]: cri-containerd-40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1.scope: Deactivated successfully. Jun 20 18:58:17.084839 systemd[1]: cri-containerd-40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1.scope: Consumed 3.529s CPU time, 73.1M memory peak, 30.4M read from disk. 
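The `Node became not ready` entry records the NodeReady condition flipping to False with reason NetworkPluginNotReady while the CNI plugin initializes; the later lxc_health link-up entries suggest the cilium-agent started above then brought networking back. A sketch, assuming cluster access, that reads the Ready condition for the node named in the log:

```python
# Sketch: print the Ready condition of the node mentioned in the kubelet
# "Node became not ready" entry above.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node = v1.read_node("ip-172-31-28-28")
for cond in node.status.conditions:
    if cond.type == "Ready":
        print(cond.status, cond.reason, cond.message)
```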
Jun 20 18:58:17.110342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1-rootfs.mount: Deactivated successfully. Jun 20 18:58:17.132579 containerd[1906]: time="2025-06-20T18:58:17.132461075Z" level=info msg="shim disconnected" id=40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1 namespace=k8s.io Jun 20 18:58:17.132579 containerd[1906]: time="2025-06-20T18:58:17.132523689Z" level=warning msg="cleaning up after shim disconnected" id=40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1 namespace=k8s.io Jun 20 18:58:17.132579 containerd[1906]: time="2025-06-20T18:58:17.132537590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:58:17.874940 kubelet[3159]: I0620 18:58:17.874849 3159 scope.go:117] "RemoveContainer" containerID="40a93ddbc8243ad603b5703a986af163c9db62c70a15c98289a45efe54ad72c1" Jun 20 18:58:17.878878 containerd[1906]: time="2025-06-20T18:58:17.878836552Z" level=info msg="CreateContainer within sandbox \"173127a87035844a941584b3076d34d364f9f911b8954dc0d185a8370f874476\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 20 18:58:17.899615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826578142.mount: Deactivated successfully. Jun 20 18:58:17.908533 containerd[1906]: time="2025-06-20T18:58:17.908489194Z" level=info msg="CreateContainer within sandbox \"173127a87035844a941584b3076d34d364f9f911b8954dc0d185a8370f874476\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"68649123c6c9d40692e59ce67ff3dee1ab9d329b7bc77f45f23f2723fcaf985c\"" Jun 20 18:58:17.909057 containerd[1906]: time="2025-06-20T18:58:17.908968958Z" level=info msg="StartContainer for \"68649123c6c9d40692e59ce67ff3dee1ab9d329b7bc77f45f23f2723fcaf985c\"" Jun 20 18:58:17.940393 systemd[1]: Started cri-containerd-68649123c6c9d40692e59ce67ff3dee1ab9d329b7bc77f45f23f2723fcaf985c.scope - libcontainer container 68649123c6c9d40692e59ce67ff3dee1ab9d329b7bc77f45f23f2723fcaf985c. Jun 20 18:58:17.987884 containerd[1906]: time="2025-06-20T18:58:17.987819337Z" level=info msg="StartContainer for \"68649123c6c9d40692e59ce67ff3dee1ab9d329b7bc77f45f23f2723fcaf985c\" returns successfully" Jun 20 18:58:18.109299 systemd[1]: run-containerd-runc-k8s.io-68649123c6c9d40692e59ce67ff3dee1ab9d329b7bc77f45f23f2723fcaf985c-runc.XsTsAt.mount: Deactivated successfully. 
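The RemoveContainer / CreateContainer pair above is the kubelet replacing the exited kube-controller-manager container inside its existing sandbox (note Attempt:1). A sketch that checks the resulting restart count, assuming cluster access and the usual static-pod mirror naming of `<component>-<node-name>`; the exact pod name is an assumption, not taken from the log:

```python
# Sketch: show restart counts for the kube-controller-manager mirror pod.
# The pod name below assumes the conventional <component>-<node-name> form.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("kube-controller-manager-ip-172-31-28-28", "kube-system")
for st in pod.status.container_statuses:
    print(st.name, "restartCount =", st.restart_count)
```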
Jun 20 18:58:19.520697 containerd[1906]: time="2025-06-20T18:58:19.520663745Z" level=info msg="StopPodSandbox for \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\"" Jun 20 18:58:19.521080 containerd[1906]: time="2025-06-20T18:58:19.520753013Z" level=info msg="TearDown network for sandbox \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\" successfully" Jun 20 18:58:19.521080 containerd[1906]: time="2025-06-20T18:58:19.520763052Z" level=info msg="StopPodSandbox for \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\" returns successfully" Jun 20 18:58:19.521080 containerd[1906]: time="2025-06-20T18:58:19.521073705Z" level=info msg="RemovePodSandbox for \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\"" Jun 20 18:58:19.521172 containerd[1906]: time="2025-06-20T18:58:19.521093974Z" level=info msg="Forcibly stopping sandbox \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\"" Jun 20 18:58:19.521172 containerd[1906]: time="2025-06-20T18:58:19.521139374Z" level=info msg="TearDown network for sandbox \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\" successfully" Jun 20 18:58:19.527505 containerd[1906]: time="2025-06-20T18:58:19.527454633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 20 18:58:19.527634 containerd[1906]: time="2025-06-20T18:58:19.527528414Z" level=info msg="RemovePodSandbox \"936789afe269089ea4fe39069054dee026650c2c8dd560c043cf09e1c5c7041c\" returns successfully" Jun 20 18:58:19.527954 containerd[1906]: time="2025-06-20T18:58:19.527917806Z" level=info msg="StopPodSandbox for \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\"" Jun 20 18:58:19.528052 containerd[1906]: time="2025-06-20T18:58:19.528002738Z" level=info msg="TearDown network for sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" successfully" Jun 20 18:58:19.528052 containerd[1906]: time="2025-06-20T18:58:19.528012964Z" level=info msg="StopPodSandbox for \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" returns successfully" Jun 20 18:58:19.528412 containerd[1906]: time="2025-06-20T18:58:19.528312223Z" level=info msg="RemovePodSandbox for \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\"" Jun 20 18:58:19.528412 containerd[1906]: time="2025-06-20T18:58:19.528344248Z" level=info msg="Forcibly stopping sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\"" Jun 20 18:58:19.528550 containerd[1906]: time="2025-06-20T18:58:19.528398169Z" level=info msg="TearDown network for sandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" successfully" Jun 20 18:58:19.533475 containerd[1906]: time="2025-06-20T18:58:19.533430690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 20 18:58:19.533741 containerd[1906]: time="2025-06-20T18:58:19.533483724Z" level=info msg="RemovePodSandbox \"c4958c52bed2fad40678d574209cf5ad4ee641646fb9193263eb78c84aa37cd5\" returns successfully" Jun 20 18:58:21.962400 kubelet[3159]: E0620 18:58:21.962325 3159 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 20 18:58:22.500046 systemd[1]: cri-containerd-a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe.scope: Deactivated successfully. Jun 20 18:58:22.500748 systemd[1]: cri-containerd-a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe.scope: Consumed 1.973s CPU time, 31.5M memory peak, 15.3M read from disk. Jun 20 18:58:22.523846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe-rootfs.mount: Deactivated successfully. Jun 20 18:58:22.550807 containerd[1906]: time="2025-06-20T18:58:22.550710694Z" level=info msg="shim disconnected" id=a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe namespace=k8s.io Jun 20 18:58:22.550807 containerd[1906]: time="2025-06-20T18:58:22.550759379Z" level=warning msg="cleaning up after shim disconnected" id=a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe namespace=k8s.io Jun 20 18:58:22.550807 containerd[1906]: time="2025-06-20T18:58:22.550773718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:58:22.885972 kubelet[3159]: I0620 18:58:22.885865 3159 scope.go:117] "RemoveContainer" containerID="a895e0542a759f57324f638e43e456f3ecc3f45020c2ff460f501ef188c82ffe" Jun 20 18:58:22.887740 containerd[1906]: time="2025-06-20T18:58:22.887697309Z" level=info msg="CreateContainer within sandbox \"19c1eaccf18d85f8bc26fda22bb82b298788a22148987b49cd51f7056038d7f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 20 18:58:22.911312 containerd[1906]: time="2025-06-20T18:58:22.911265539Z" level=info msg="CreateContainer within sandbox \"19c1eaccf18d85f8bc26fda22bb82b298788a22148987b49cd51f7056038d7f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"efe737328dddbad93e9451ab90da7792802dd22705081fe6c859f2556728d8b6\"" Jun 20 18:58:22.911766 containerd[1906]: time="2025-06-20T18:58:22.911743426Z" level=info msg="StartContainer for \"efe737328dddbad93e9451ab90da7792802dd22705081fe6c859f2556728d8b6\"" Jun 20 18:58:22.947479 systemd[1]: Started cri-containerd-efe737328dddbad93e9451ab90da7792802dd22705081fe6c859f2556728d8b6.scope - libcontainer container efe737328dddbad93e9451ab90da7792802dd22705081fe6c859f2556728d8b6. Jun 20 18:58:22.995800 containerd[1906]: time="2025-06-20T18:58:22.995444053Z" level=info msg="StartContainer for \"efe737328dddbad93e9451ab90da7792802dd22705081fe6c859f2556728d8b6\" returns successfully" Jun 20 18:58:23.524553 systemd[1]: run-containerd-runc-k8s.io-efe737328dddbad93e9451ab90da7792802dd22705081fe6c859f2556728d8b6-runc.vyvnGH.mount: Deactivated successfully.
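The `Failed to update lease` error refers to the node-heartbeat Lease object in the kube-node-lease namespace, which the kubelet renews every few seconds; the client timeout indicates the kubelet could not reach the local API server at that moment, shortly before the kube-scheduler container was restarted. A sketch, assuming cluster access, that reads the Lease to see who holds it and when it was last renewed:

```python
# Sketch: inspect the node heartbeat Lease named in the kubelet error above.
from kubernetes import client, config

config.load_kube_config()
coord = client.CoordinationV1Api()

lease = coord.read_namespaced_lease("ip-172-31-28-28", "kube-node-lease")
print("holder: ", lease.spec.holder_identity)
print("renewed:", lease.spec.renew_time)
```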